<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gbooth</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gbooth"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Gbooth"/>
	<updated>2026-04-09T14:17:49Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18841</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18841"/>
		<updated>2014-03-14T21:14:21Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Group 4 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It outlined that process migration could be handled through different zones, as in air traffic control. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now: when the paper was written, many people would be remotely logging onto a single machine. That type of infrastructure is very much like the cloud infrastructure we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANET would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANET? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today those goals are still being met, but we mostly see communication-oriented uses such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. Each packet carried a checksum and was acknowledged, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have quite the same architecture now: IP checksums only its header and has no per-packet acknowledgment, whereas TCP provides an end-to-end checksum and acknowledgments. Kahn went on to learn from the mistakes of ARPANET in designing TCP/IP. Also, the jobs of the network interface and the router have since been decoupled.&lt;br /&gt;
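The end-to-end checksum can be made concrete. As a hypothetical sketch (not from the lecture; the algorithm is defined in RFC 1071), the 16-bit ones&#039;-complement &amp;quot;Internet checksum&amp;quot; that TCP applies end to end can be computed as follows:&lt;br /&gt;

```python
# Hypothetical sketch (not part of the lecture notes): the 16-bit
# ones'-complement Internet checksum used end-to-end by TCP (RFC 1071).
def internet_checksum(data: bytes) -> int:
    """Return the RFC 1071 checksum of `data` as a 16-bit integer."""
    if len(data) % 2:                    # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):     # sum the data as 16-bit big-endian words
        total += data[i] * 256 + data[i + 1]
    while total // 0x10000:              # fold any carries back into the low 16 bits
        total = total % 0x10000 + total // 0x10000
    return 0xFFFF - total                # ones' complement of the folded sum
```

For instance, the sample data from RFC 1071 (00 01 f2 03 f4 f5 f6 f7) checksums to 0x220d, and a receiver verifies a segment by checking that the data plus its checksum folds to 0xffff.&lt;br /&gt;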
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note:&lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Many of the more technical specifications were unclear, but we mostly skipped over those.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purpose of early networks was resource sharing. Abstractions were used for transmission, and message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
Specialized hardware/software sharing and information sharing; a superset of resource sharing.&lt;br /&gt;
&lt;br /&gt;
The ad-hoc routing was essentially TCP in all but name, and it is largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The purposes envisioned for computer networks were:&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information sharing is still relevant today, especially in research and large simulations. Remote access has mostly become a specialized need.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were computing resources (especially expensive mainframes) and data sets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
They envisioned a primitive layered architecture with dedicated routing functions. The topologies considered included:&lt;br /&gt;
* star&lt;br /&gt;
* loop&lt;br /&gt;
* bus&lt;br /&gt;
&lt;br /&gt;
It was also primarily packet- or message-switched: circuit-switching was too expensive and had long setup times, whereas packet- or message-switching did not require committing resources in advance. There was also primitive flow control and buffering.&lt;br /&gt;
&lt;br /&gt;
This network architecture predated proper congestion control, such as Van Jacobson&#039;s slow start. Routing was either ad hoc or based on something similar to RIP. Networks were provisioned with elephants (large flows) in mind, so mice (small flows) suffered latency issues. Unlike the modern Internet, there was error control and retransmission at every hop.&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different: Ethernet and ATM are used. The modern Internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The weird packet format, page 1400 (4 of the PDF): &amp;quot;Node 6, discovering the message is for itself, replaces the destination address by the source address&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared? What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision? Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
&lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear? &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can regard computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). We cannot put 20,000 computers in our basements; instead, the Internet facilitates access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy of a large distributed system. That is something of a contradiction if you think about it in terms of the design goals of distributed systems.&lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;first player&amp;quot; to market with a solution to a niche problem, especially one based on simple rather than complex mechanisms, gets adopted faster. The classic example is the Internet: ARPANET, an academic research project that was simple, open, and the first of its kind, was widely adopted and evolved into the Internet we see today. This approach is not without drawbacks: security was not factored into the design of ARPANET, since it was intended to be a network between trusted parties, which was fine at the time; but when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the &amp;quot;first player&amp;quot; in a niche market, and to meet that objective, simple frameworks and mechanisms are often used. In doing so there is a risk of leaving out a component that turns out to be a vital missing link; a recent example is the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18840</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18840"/>
		<updated>2014-03-14T21:13:31Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It outlined that process migration could be handled through different zones, as in air traffic control. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now: when the paper was written, many people would be remotely logging onto a single machine. That type of infrastructure is very much like the cloud infrastructure we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANET would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANET? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today those goals are still being met, but we mostly see communication-oriented uses such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. Each packet carried a checksum and was acknowledged, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have quite the same architecture now: IP checksums only its header and has no per-packet acknowledgment, whereas TCP provides an end-to-end checksum and acknowledgments. Kahn went on to learn from the mistakes of ARPANET in designing TCP/IP. Also, the jobs of the network interface and the router have since been decoupled.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note:&lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Many of the more technical specifications were unclear, but we mostly skipped over those.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purpose of early networks was resource sharing. Abstractions were used for transmission, and message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
Specialized hardware/software sharing and information sharing; a superset of resource sharing.&lt;br /&gt;
&lt;br /&gt;
The ad-hoc routing was essentially TCP in all but name, and it is largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The purposes envisioned for computer networks were:&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information sharing is still relevant today, especially in research and large simulations. Remote access has mostly become a specialized need.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were computing resources (especially expensive mainframes) and data sets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
They envisioned a primitive layered architecture with dedicated routing functions. The topologies considered included:&lt;br /&gt;
* star&lt;br /&gt;
* loop&lt;br /&gt;
* bus&lt;br /&gt;
&lt;br /&gt;
It was also primarily packet- or message-switched: circuit-switching was too expensive and had long setup times, whereas packet- or message-switching did not require committing resources in advance. There was also primitive flow control and buffering.&lt;br /&gt;
&lt;br /&gt;
This network architecture predated proper congestion control, such as Van Jacobson&#039;s slow start. Routing was either ad hoc or based on something similar to RIP. Networks were provisioned with elephants (large flows) in mind, so mice (small flows) suffered latency issues. Unlike the modern Internet, there was error control and retransmission at every hop.&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different: Ethernet and ATM are used. The modern Internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The weird packet format, page 1400 (4 of the PDF): &amp;quot;Node 6, discovering the message is for itself, replaces the destination address by the source address&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared? What resources are shared today?&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision? Do we still have the same architecture?&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
* What was unclear? &lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can regard computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). We cannot put 20,000 computers in our basements; instead, the Internet facilitates access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy of a large distributed system. That is something of a contradiction if you think about it in terms of the design goals of distributed systems.&lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;first player&amp;quot; to market with a solution to a niche problem, especially one based on simple rather than complex mechanisms, gets adopted faster. The classic example is the Internet: ARPANET, an academic research project that was simple, open, and the first of its kind, was widely adopted and evolved into the Internet we see today. This approach is not without drawbacks: security was not factored into the design of ARPANET, since it was intended to be a network between trusted parties, which was fine at the time; but when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the &amp;quot;first player&amp;quot; in a niche market, and to meet that objective, simple frameworks and mechanisms are often used. In doing so there is a risk of leaving out a component that turns out to be a vital missing link; a recent example is the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18839</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18839"/>
		<updated>2014-03-14T21:06:13Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Group 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It outlined that process migration could be handled through different zones, as in air traffic control. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now: when the paper was written, many people would be remotely logging onto a single machine. That type of infrastructure is very much like the cloud infrastructure we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANET would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANET? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today those goals are still being met, but we mostly see communication-oriented uses such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. Each packet carried a checksum and was acknowledged, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have quite the same architecture now: IP checksums only its header and has no per-packet acknowledgment, whereas TCP provides an end-to-end checksum and acknowledgments. Kahn went on to learn from the mistakes of ARPANET in designing TCP/IP. Also, the jobs of the network interface and the router have since been decoupled.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note:&lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of a distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Many of the more technical specifications were unclear, but we mostly skipped over those.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purpose of early networks was resource sharing. Abstractions were used for transmission, and message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
Specialized hardware/software and information were shared; today we share a superset of those resources.&lt;br /&gt;
&lt;br /&gt;
The ad-hoc routing was essentially TCP without saying so, and it is largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?  What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What network architecture did they envision?  Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; What was unclear?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
===Envisioned computer network purposes===&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information-sharing is still relevant today, especially in research and large simulations. Remote access has mostly devolved into a specialized need.&lt;br /&gt;
&lt;br /&gt;
===Resources shared===&lt;br /&gt;
* Computing resources (especially expensive mainframes)&lt;br /&gt;
* Data sets&lt;br /&gt;
&lt;br /&gt;
===Network architecture===&lt;br /&gt;
* A primitive layered architecture&lt;br /&gt;
* Dedicated routing functions&lt;br /&gt;
* Various topologies:&lt;br /&gt;
** star&lt;br /&gt;
** loop&lt;br /&gt;
** bus&lt;br /&gt;
* Primarily (packet|message)-switched&lt;br /&gt;
** Circuit-switching too expensive and has large setup times&lt;br /&gt;
** Doesn&#039;t require committing resources&lt;br /&gt;
* Primitive flow control and buffering&lt;br /&gt;
* Predates proper congestion control such as Van Jacobson&#039;s slow start&lt;br /&gt;
* Ad-hoc routing or based on something similar to RIP&lt;br /&gt;
* Anticipation of elephants and mice latency issues&lt;br /&gt;
* Unlike modern internet, error control and retransmission at every step&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different, with Ethernet and ATM in use. The modern Internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
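The &amp;quot;ad-hoc routing or something similar to RIP&amp;quot; point can be made concrete with the distance-vector update rule that RIP is built on. A hypothetical sketch (the names and unit link cost are our assumptions, not from the paper):&lt;br /&gt;

```python
INF = 16  # RIP treats 16 hops as unreachable

def dv_update(my_table, neighbour_table, link_cost=1):
    """Merge one neighbour's advertised distances into our routing table.

    Both tables map destination -> hop count. Returns (new_table, changed).
    """
    new_table = dict(my_table)
    changed = False
    for dest, dist in neighbour_table.items():
        candidate = min(dist + link_cost, INF)
        if candidate < new_table.get(dest, INF):
            new_table[dest] = candidate  # found a shorter route via this neighbour
            changed = True
    return new_table, changed

# Node A hears from direct neighbour B, which can reach C in one hop:
table_a, _ = dv_update({"A": 0}, {"B": 0, "C": 1})
# table_a is now {"A": 0, "B": 1, "C": 2}
```

Each node repeats this step whenever a neighbour advertises, and the tables converge network-wide without any central coordination, much like the ad-hoc schemes described here.&lt;br /&gt;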
===Surprising aspects===&lt;br /&gt;
&lt;br /&gt;
===Unclear portions===&lt;br /&gt;
* Weird packet format: page 1400 (4 of the PDF): “Node 6, discovering the message is for itself, replaces the destination address by the source address”&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared? What resources are shared today?&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision? Do we still have the same architecture?&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
* What was unclear? &lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can think of computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). We cannot put 20,000 computers in our basements; instead, the Internet gives us access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy inside a large distributed system, which is a somewhat contradictory phenomenon if you think about the design goals of distributed systems.&lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;early to market&amp;quot; or first player with a solution to a niche problem, particularly one based on simple rather than complex mechanisms, gets adopted faster. The classic example is the Internet: ARPANET, an academic research project that was simple, open, and the first of its kind, was widely adopted and evolved into the Internet as we see it today. This approach is not without drawbacks. Security, for instance, was not factored into the design of ARPANET, since it was intended to be a network between trusted parties; that was fine then, but when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the first player in a niche market, and simple frameworks and mechanisms are often used to meet that objective. In doing so there is a possibility of leaving out components that turn out to be vital missing links, a recent example being the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18838</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18838"/>
		<updated>2014-03-14T21:01:54Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It outlined how process migration could be done across different zones of air traffic control. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now, because when the paper was written, many people would remotely log onto a single machine. That type of infrastructure is very much like the cloud infrastructure that we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANET would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANET? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today, those goals are still being pursued, but most of what we see is communication-based, such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. Each packet carried a checksum and an acknowledgement, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have the same architecture now: IP has no acknowledgements and checksums only its header, while TCP provides an end-to-end checksum and acknowledgement. Kahn went on to learn from the errors of ARPANET when designing TCP/IP. Also, the jobs of the network interface and the router have now been decoupled.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note: &lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of a distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Many of the more technical specifications were unclear, but we mostly skipped over those.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
1. The main purpose of early networks was resource sharing. Abstractions were used for transmission, and message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
2. Specialized hardware/software and information were shared; today we share a superset of those resources.&lt;br /&gt;
&lt;br /&gt;
3. Ad-hoc routing; it was essentially TCP without saying so. Largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
===Envisioned computer network purposes===&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information-sharing is still relevant today, especially in research and large simulations. Remote access has mostly devolved into a specialized need.&lt;br /&gt;
&lt;br /&gt;
===Resources shared===&lt;br /&gt;
* Computing resources (especially expensive mainframes)&lt;br /&gt;
* Data sets&lt;br /&gt;
&lt;br /&gt;
===Network architecture===&lt;br /&gt;
* A primitive layered architecture&lt;br /&gt;
* Dedicated routing functions&lt;br /&gt;
* Various topologies:&lt;br /&gt;
** star&lt;br /&gt;
** loop&lt;br /&gt;
** bus&lt;br /&gt;
* Primarily (packet|message)-switched&lt;br /&gt;
** Circuit-switching too expensive and has large setup times&lt;br /&gt;
** Doesn&#039;t require committing resources&lt;br /&gt;
* Primitive flow control and buffering&lt;br /&gt;
* Predates proper congestion control such as Van Jacobson&#039;s slow start&lt;br /&gt;
* Ad-hoc routing or based on something similar to RIP&lt;br /&gt;
* Anticipation of elephants and mice latency issues&lt;br /&gt;
* Unlike modern internet, error control and retransmission at every step&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different, with Ethernet and ATM in use. The modern Internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
===Surprising aspects===&lt;br /&gt;
&lt;br /&gt;
===Unclear portions===&lt;br /&gt;
* Weird packet format: page 1400 (4 of the PDF): “Node 6, discovering the message is for itself, replaces the destination address by the source address”&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared? What resources are shared today?&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision? Do we still have the same architecture?&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
* What was unclear? &lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can think of computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). We cannot put 20,000 computers in our basements; instead, the Internet gives us access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy inside a large distributed system, which is a somewhat contradictory phenomenon if you think about the design goals of distributed systems.&lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;early to market&amp;quot; or first player with a solution to a niche problem, particularly one based on simple rather than complex mechanisms, gets adopted faster. The classic example is the Internet: ARPANET, an academic research project that was simple, open, and the first of its kind, was widely adopted and evolved into the Internet as we see it today. This approach is not without drawbacks. Security, for instance, was not factored into the design of ARPANET, since it was intended to be a network between trusted parties; that was fine then, but when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the first player in a niche market, and simple frameworks and mechanisms are often used to meet that objective. In doing so there is a possibility of leaving out components that turn out to be vital missing links, a recent example being the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18837</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18837"/>
		<updated>2014-03-14T21:01:38Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It outlined how process migration could be done across different zones of air traffic control. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now, because when the paper was written, many people would remotely log onto a single machine. That type of infrastructure is very much like the cloud infrastructure that we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANET would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANET? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today, those goals are still being pursued, but most of what we see is communication-based, such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. Each packet carried a checksum and an acknowledgement, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have the same architecture now: IP has no acknowledgements and checksums only its header, while TCP provides an end-to-end checksum and acknowledgement. Kahn went on to learn from the errors of ARPANET when designing TCP/IP. Also, the jobs of the network interface and the router have now been decoupled.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note: &lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of a distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Many of the more technical specifications were unclear, but we mostly skipped over those.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
1. The main purpose of early networks was resource sharing. Abstractions were used for transmission, and message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
2. Specialized hardware/software and information were shared; today we share a superset of those resources.&lt;br /&gt;
&lt;br /&gt;
3. Ad-hoc routing; it was essentially TCP without saying so. Largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
===Envisioned computer network purposes===&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information-sharing is still relevant today, especially in research and large simulations. Remote access has mostly devolved into a specialized need.&lt;br /&gt;
&lt;br /&gt;
===Resources shared===&lt;br /&gt;
* Computing resources (especially expensive mainframes)&lt;br /&gt;
* Data sets&lt;br /&gt;
&lt;br /&gt;
===Network architecture===&lt;br /&gt;
* A primitive layered architecture&lt;br /&gt;
* Dedicated routing functions&lt;br /&gt;
* Various topologies:&lt;br /&gt;
** star&lt;br /&gt;
** loop&lt;br /&gt;
** bus&lt;br /&gt;
* Primarily (packet|message)-switched&lt;br /&gt;
** Circuit-switching too expensive and has large setup times&lt;br /&gt;
** Doesn&#039;t require committing resources&lt;br /&gt;
* Primitive flow control and buffering&lt;br /&gt;
* Predates proper congestion control such as Van Jacobson&#039;s slow start&lt;br /&gt;
* Ad-hoc routing or based on something similar to RIP&lt;br /&gt;
* Anticipation of elephants and mice latency issues&lt;br /&gt;
* Unlike modern internet, error control and retransmission at every step&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different, with Ethernet and ATM in use. The modern Internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
===Surprising aspects===&lt;br /&gt;
&lt;br /&gt;
===Unclear portions===&lt;br /&gt;
* Weird packet format: page 1400 (4 of the PDF): “Node 6, discovering the message is for itself, replaces the destination address by the source address”&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared? What resources are shared today?&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision? Do we still have the same architecture?&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
* What was unclear? &lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can think of computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). We cannot put 20,000 computers in our basements; instead, the Internet gives us access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy inside a large distributed system, which is a somewhat contradictory phenomenon if you think about the design goals of distributed systems.&lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;early to market&amp;quot; or first player with a solution to a niche problem, particularly one based on simple rather than complex mechanisms, gets adopted faster. The classic example is the Internet: ARPANET, an academic research project that was simple, open, and the first of its kind, was widely adopted and evolved into the Internet as we see it today. This approach is not without drawbacks. Security, for instance, was not factored into the design of ARPANET, since it was intended to be a network between trusted parties; that was fine then, but when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the first player in a niche market, and simple frameworks and mechanisms are often used to meet that objective. In doing so there is a possibility of leaving out components that turn out to be vital missing links, a recent example being the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18836</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18836"/>
		<updated>2014-03-14T21:00:35Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Group 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
The video was mostly a summary of Kahn&#039;s paper. It was outlined that process migration could be done through different zones of air traffic control. Back then, a &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of now, because when the paper was written, many people would be remotely logging into a single machine. This type of infrastructure is very much like the cloud infrastructure that we talk about and see today.&lt;br /&gt;
&lt;br /&gt;
The Alto paper referenced Kahn&#039;s paper, and the Alto designers had the foresight to see that networks such as ARPANet would be necessary. However, there are still some questions that come up in discussion, such as:&lt;br /&gt;
* Would it be useful to have a co-processor responsible for maintaining shared resources even today? Would this be like the IMPs of ARPANet? &lt;br /&gt;
Today, computers are usually so fast that it doesn&#039;t really seem to matter. This is still interesting to ruminate on, though.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What were the purposes envisioned for computer networks?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main purposes envisioned were:&lt;br /&gt;
* Big computation&lt;br /&gt;
* Storage&lt;br /&gt;
* Resource sharing&lt;br /&gt;
Essentially, being able to &amp;quot;have a library on a hard disk&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do those compare with the uses they are put to today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Today, those goals are being met, but we mostly see communication-based uses such as instant messaging and email.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What sort of resources were shared?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The main resources being shared were databases and CPU time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What resources are shared today?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Storage is the main resource being shared today.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What network architecture did they envision?  &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The network architecture would make use of packet-switching. There would be a checksum and acknowledgment on each packet, and the IMPs served as both the network interfaces and the routers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Do we still have the same architecture?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although packet-switching definitely won, we do not have quite the same architecture now: IP checksums only its own header and does not acknowledge packets, while TCP provides an end-to-end checksum and acknowledgment. Kahn went on to learn from the errors of ARPANet in designing TCP/IP. Also, the jobs of the network interface and the router have now been decoupled.&lt;br /&gt;
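The checksum idea survives in the Internet checksum used by TCP (and by the IPv4 header). As a rough illustration, not anything taken from Kahn&#039;s paper, the 16-bit ones-complement sum can be sketched in Python:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071-style 16-bit ones-complement Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF                         # ones complement of the sum

# A receiver sums the data together with the transmitted checksum;
# an intact packet yields a checksum of 0.
payload = b"\x01\x02\x03\x04"
csum = internet_checksum(payload)
ok = internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```

TCP computes this end-to-end over the whole segment, whereas the ARPANET IMPs checked and acknowledged on every hop.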
&lt;br /&gt;
&#039;&#039;&#039;What surprised you about this paper?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Everything about this paper was surprising. How were they able to do all of this? A network interface card and router were the size of a fridge! Some general things of note:&lt;br /&gt;
* High-level languages&lt;br /&gt;
* Bootstrapping protocols, bootstrapping applications&lt;br /&gt;
* Primitive computers&lt;br /&gt;
* Desktop publishing&lt;br /&gt;
* The logistics of running a cable from one university to another&lt;br /&gt;
* How old the idea of distributed operating system is&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What was unclear?&#039;&#039;&#039;&lt;br /&gt;
Much of the more technical specification was unclear, but we mostly skipped over those parts.&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
1. The main purpose of early networks was resource sharing, with an abstraction for transmission; message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
2. Specialized hardware/software and information sharing: a superset of resource sharing.&lt;br /&gt;
&lt;br /&gt;
3. Ad-hoc routing; it was essentially TCP without saying so. Largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
===Envisioned computer network purposes===&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information-sharing is still relevant today, especially in research and large simulations. Remote access has mostly devolved into a specialized need.&lt;br /&gt;
&lt;br /&gt;
===Resources shared===&lt;br /&gt;
* Computing resources (especially expensive mainframes)&lt;br /&gt;
* Data sets&lt;br /&gt;
&lt;br /&gt;
===Network architecture===&lt;br /&gt;
* A primitive layered architecture&lt;br /&gt;
* Dedicated routing functions&lt;br /&gt;
* Various topologies:&lt;br /&gt;
** star&lt;br /&gt;
** loop&lt;br /&gt;
** bus&lt;br /&gt;
* Primarily (packet|message)-switched&lt;br /&gt;
** Circuit-switching too expensive and has large setup times&lt;br /&gt;
** Doesn&#039;t require committing resources&lt;br /&gt;
* Primitive flow control and buffering&lt;br /&gt;
* Predates proper congestion control such as Van Jacobson&#039;s slow start&lt;br /&gt;
* Ad-hoc routing or based on something similar to RIP&lt;br /&gt;
* Anticipation of elephants and mice latency issues&lt;br /&gt;
* Unlike modern internet, error control and retransmission at every step&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different: use of Ethernet and ATM. The modern internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
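The RIP-like routing mentioned above is distance-vector routing: each node repeatedly adopts the best distance its neighbours advertise. A minimal sketch in Python (illustrative only; not the actual ARPANET or RIP implementation, though the 16-hop &amp;quot;infinity&amp;quot; is borrowed from RIP):

```python
# Minimal distance-vector routing sketch. Links are symmetric per-node
# neighbour costs; nodes iterate until no routing table changes.
INF = 16  # RIP treats 16 hops as "unreachable"

def distance_vector(links):
    """links: {node: {neighbour: cost}}; returns {node: {dest: distance}}."""
    nodes = list(links)
    dist = {n: {d: (0 if d == n else links[n].get(d, INF)) for d in nodes}
            for n in nodes}
    changed = True
    while changed:                       # repeat until convergence
        changed = False
        for n in nodes:
            for nbr, cost in links[n].items():
                for dest in nodes:
                    # best known path via this neighbour, capped at INF
                    alt = min(INF, cost + dist[nbr][dest])
                    if alt < dist[n][dest]:
                        dist[n][dest] = alt
                        changed = True
    return dist

# Example: a -- b -- c line topology with unit costs.
tables = distance_vector({"a": {"b": 1}, "b": {"a": 1, "c": 1}, "c": {"b": 1}})
```

Here node a learns a 2-hop route to c purely from b&#039;s advertised table, which is the essence of the scheme.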
&lt;br /&gt;
===Surprising aspects===&lt;br /&gt;
&lt;br /&gt;
===Unclear portions===&lt;br /&gt;
* Weird packet format: Page 1400 (4 of PDF): “Node 6, discovering the message is for itself, replaces the destination address by the source address”&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared? What resources are shared today?&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision? Do we still have the same architecture?&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
* What was unclear? &lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can think of computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). It&#039;s not possible to put 20,000 computers in our basements; instead, the Internet facilitates access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy of a large distributed system. That is something of a contradictory phenomenon if you think about it in terms of the design goals of distributed systems. &lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;early to market&amp;quot; or &amp;quot;first player&amp;quot; with a new solution to a niche problem, especially one based on simple mechanisms rather than complex ones, gets adopted faster. The classic example is the Internet. ARPANET, which began as an academic research project, was based on simple mechanisms, was open, and was the first of its kind; it was adopted widely and evolved into the Internet as we see it today. Note that this approach is not without drawbacks: for example, security was not factored into the design of ARPANET, since it was intended to be a network between trusted parties, which was fine at the time. But when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the &amp;quot;first player&amp;quot; in a niche market, and to meet that objective, simple frameworks and mechanisms are often used. In doing so there is a possibility of leaving out components that turn out to be vital missing links; a recent example is the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18835</id>
		<title>DistOS 2014W Lecture 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_3&amp;diff=18835"/>
		<updated>2014-03-14T20:46:04Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Questions to consider: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==The Early Internet (Jan. 14)==&lt;br /&gt;
&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
== Questions to consider: ==&lt;br /&gt;
# What were the purposes envisioned for computer networks?  How do those compare with the uses they are put to today?&lt;br /&gt;
# What sort of resources were shared?  What resources are shared today?&lt;br /&gt;
# What network architecture did they envision?  Do we still have the same architecture?&lt;br /&gt;
# What surprised you about this paper?&lt;br /&gt;
# What was unclear?&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
* video was mostly a summary of Kahn&#039;s paper&lt;br /&gt;
* process migration through different zones of air traffic control&lt;br /&gt;
* &amp;quot;distributed OS&amp;quot; meant something different from what we normally think of, because many people would log in remotely to a single machine; it is very much like the cloud infrastructure that we talk about today&lt;br /&gt;
* alto paper makes reference to Kahn&#039;s paper, and the alto designers had the foresight to see that networks like arpanet would be necessary&lt;br /&gt;
* would it be useful to have a co-processor responsible for maintaining shared resources even today?  Like the IMPs of the arpanet?  Today, computers are usually so fast it doesn&#039;t really matter.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks?&lt;br /&gt;
** big computation, storage, resource sharing - &amp;quot;having a library on a hard disk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* How do those compare with the uses they are put to today?&lt;br /&gt;
** those things are being done, but mostly communication like instant messaging, email&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared?&lt;br /&gt;
** databases, CPU time&lt;br /&gt;
&lt;br /&gt;
* What resources are shared today?&lt;br /&gt;
** mostly storage&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision?  &lt;br /&gt;
** they had a checksum and acknowledge on each packet&lt;br /&gt;
** the IMPs were the network interface and the routers&lt;br /&gt;
** packet-switching&lt;br /&gt;
&lt;br /&gt;
* Do we still have the same architecture?&lt;br /&gt;
** packet-switching definitely won&lt;br /&gt;
** no; now IP checksums only its own header and doesn&#039;t acknowledge, but TCP has an end-to-end checksum and acknowledgment&lt;br /&gt;
** Kahn went on to learn from the errors of arpanet to design TCP/IP&lt;br /&gt;
** the job of network interface and router have been decoupled&lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
** everything&lt;br /&gt;
** how they were able to do this&lt;br /&gt;
** a network interface card and router was the size of a fridge&lt;br /&gt;
** high-level languages&lt;br /&gt;
** bootstrap protocol, bootstrapping an application&lt;br /&gt;
** primitive computers&lt;br /&gt;
** desktop publishing&lt;br /&gt;
** the logistics of running a cable from one university to another&lt;br /&gt;
** how old the idea of distributed operating systems is&lt;br /&gt;
&lt;br /&gt;
* What was unclear?&lt;br /&gt;
** much of the more technical specifications, but we mostly skipped over those&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
1. The main purpose of early networks was resource sharing, with an abstraction for transmission; message reliability was a by-product. The underlying idea is the same today.&lt;br /&gt;
&lt;br /&gt;
2. Specialized hardware/software and information sharing: a superset of resource sharing.&lt;br /&gt;
&lt;br /&gt;
3. Ad-hoc routing; it was essentially TCP without saying so. Largely unchanged today.&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;br /&gt;
===Envisioned computer network purposes===&lt;br /&gt;
* Improving reliability of services, due to redundant resource sets&lt;br /&gt;
* Resource sharing&lt;br /&gt;
* Usage modes:&lt;br /&gt;
** Users can use a remote terminal, from a remote office or home, to access those resources.&lt;br /&gt;
** Would allow centralization of resources, to improve ease of management and do away with inefficiencies&lt;br /&gt;
* Allow specialization of various sites, rather than each site trying to do it all&lt;br /&gt;
* Distributed simulations (notably air traffic control)&lt;br /&gt;
&lt;br /&gt;
Information-sharing is still relevant today, especially in research and large simulations. Remote access has mostly devolved into a specialized need.&lt;br /&gt;
&lt;br /&gt;
===Resources shared===&lt;br /&gt;
* Computing resources (especially expensive mainframes)&lt;br /&gt;
* Data sets&lt;br /&gt;
&lt;br /&gt;
===Network architecture===&lt;br /&gt;
* A primitive layered architecture&lt;br /&gt;
* Dedicated routing functions&lt;br /&gt;
* Various topologies:&lt;br /&gt;
** star&lt;br /&gt;
** loop&lt;br /&gt;
** bus&lt;br /&gt;
* Primarily (packet|message)-switched&lt;br /&gt;
** Circuit-switching too expensive and has large setup times&lt;br /&gt;
** Doesn&#039;t require committing resources&lt;br /&gt;
* Primitive flow control and buffering&lt;br /&gt;
* Predates proper congestion control such as Van Jacobson&#039;s slow start&lt;br /&gt;
* Ad-hoc routing or based on something similar to RIP&lt;br /&gt;
* Anticipation of elephants and mice latency issues&lt;br /&gt;
* Unlike modern internet, error control and retransmission at every step&lt;br /&gt;
&lt;br /&gt;
The architecture today is similar, but the link layer is very different: use of Ethernet and ATM. The modern internet is a collection of autonomous systems, not a single network. Routing propagation is now large-scale and semi-automated (e.g., BGP externally, IS-IS and OSPF internally).&lt;br /&gt;
&lt;br /&gt;
===Surprising aspects===&lt;br /&gt;
&lt;br /&gt;
===Unclear portions===&lt;br /&gt;
* Weird packet format: Page 1400 (4 of PDF): “Node 6, discovering the message is for itself, replaces the destination address by the source address”&lt;br /&gt;
&lt;br /&gt;
==Group 4==&lt;br /&gt;
&lt;br /&gt;
* What were the purposes envisioned for computer networks? How do those compare with the uses they are put to today?&lt;br /&gt;
&lt;br /&gt;
Networks were envisioned as providing remote access to other computers, because useful resources such as computing power, large databases, and non-portable software were local to a particular computer, not themselves shared over the network.&lt;br /&gt;
&lt;br /&gt;
Today, we use networks mostly for sharing data, although with services like Amazon AWS, we&#039;re starting to share computing resources again.  We&#039;re also moving to support collaboration (e.g. Google Docs, GitHub, etc.).&lt;br /&gt;
&lt;br /&gt;
* What sort of resources were shared? What resources are shared today?&lt;br /&gt;
&lt;br /&gt;
Computing power was the key resource being shared; today, it&#039;s access to data.  (See above.)&lt;br /&gt;
&lt;br /&gt;
* What network architecture did they envision? Do we still have the same architecture?&lt;br /&gt;
&lt;br /&gt;
Surprisingly, yes: modern networks have substantially similar architectures to the ones described in these papers.  &lt;br /&gt;
Packet-switched networks are now ubiquitous.  We no longer bother with circuit-switching even for telephony, in contrast to the assumption that non-network data would continue to use the circuit-switched common-carrier network.  &lt;br /&gt;
&lt;br /&gt;
* What surprised you about this paper?&lt;br /&gt;
&lt;br /&gt;
We were surprised by the accuracy of the predictions given how early the paper was written — even things like electronic banking.  Also surprising were technological advances since the paper was written, such as data transfer speeds (we have networks that are faster than the integrated bus in the Alto), and the predicted resolution requirements (which we are nowhere near meeting).  The amount of detail in the description of the &#039;mouse pointing device&#039; was interesting too.&lt;br /&gt;
&lt;br /&gt;
* What was unclear? &lt;br /&gt;
&lt;br /&gt;
Nothing significant; we&#039;re looking at these with the benefit of hindsight.&lt;br /&gt;
&lt;br /&gt;
==Summary of the discussion from lecture==&lt;br /&gt;
Anil&#039;s view is that even today we can think of computer networks as primarily a resource-sharing platform. For example, when we access the web or search Google, we are making use of the resource sharing facilitated by the Internet (a network of interconnected computer networks). It&#039;s not possible to put 20,000 computers in our basements; instead, the Internet facilitates access to computing power and databases built from hundreds of thousands of computers. In fact, Google and other popular search engines keep a local copy of the entire web in their data centers: a centralized copy of a large distributed system. That is something of a contradictory phenomenon if you think about it in terms of the design goals of distributed systems. &lt;br /&gt;
&lt;br /&gt;
Another important takeaway from the discussion was that the &amp;quot;early to market&amp;quot; or &amp;quot;first player&amp;quot; with a new solution to a niche problem, especially one based on simple mechanisms rather than complex ones, gets adopted faster. The classic example is the Internet. ARPANET, which began as an academic research project, was based on simple mechanisms, was open, and was the first of its kind; it was adopted widely and evolved into the Internet as we see it today. Note that this approach is not without drawbacks: for example, security was not factored into the design of ARPANET, since it was intended to be a network between trusted parties, which was fine at the time. But when ARPANET evolved into the Internet, security became an area requiring major attention. In Silicon Valley the focus is on being the &amp;quot;first player&amp;quot; in a niche market, and to meet that objective, simple frameworks and mechanisms are often used. In doing so there is a possibility of leaving out components that turn out to be vital missing links; a recent example is the security flaw in Snapchat that led to user data being exposed.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18816</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18816"/>
		<updated>2014-03-13T14:47:43Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Point of lit review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in said chosen area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to try and offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A stand-alone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* Overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* Evaluate and compare all the research in this chosen area.&lt;br /&gt;
* It might be difficult to really show weaknesses and gaps in the area as you tend to need to be an expert in the area to find these gaps. But, you can criticize the overall body of work and say what&#039;s missing and give general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to do things such as discoursing on the area overall in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3:&#039;&#039;&#039;&lt;br /&gt;
Once you have the categories, this is where the paragraphs you&#039;ve written in Step 1 come in handy. You can write an introduction for each of your categories then just dump in the paragraphs from your document. Following this, you can polish your sections and better integrate them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Notes:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Have a good introduction, outlining exactly what you&#039;re going to cover, why it&#039;s important, etc.&lt;br /&gt;
* Make sure to keep things high-level--you shouldn&#039;t have to be an expert in the sub-topic you&#039;ve chosen to be able to read and understand your literature review.&lt;br /&gt;
* Make sure to be selective with what you&#039;re including. Include all of the really current things and what is historically relevant for your topic (i.e., either very current work or very important, groundbreaking work for the topic). Intermediary work doesn&#039;t always need to be included. Don&#039;t include citations just to get your citation count up; doing so actually detracts from your literature review.&lt;br /&gt;
&lt;br /&gt;
== Final Notes ==&lt;br /&gt;
&lt;br /&gt;
* Focus on having a good introduction.&lt;br /&gt;
* Set up categories properly.&lt;br /&gt;
* Have a good introduction for each category.&lt;br /&gt;
* Give a good overview of each paper in the category.&lt;br /&gt;
* If possible, include a table to summarize everything from a section.&lt;br /&gt;
&lt;br /&gt;
Avoid concluding that the existing work is great but that it would be even better if some particular thing existed, only to discover later that it already does. Before you reach a conclusion like this, make sure that it &#039;&#039;&#039;doesn&#039;t&#039;&#039;&#039; exist already; otherwise your work doesn&#039;t look thorough.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18815</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18815"/>
		<updated>2014-03-13T14:44:28Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Tips Writing the Literature Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in said chosen area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to try and offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A stand-alone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* Overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* Evaluate and compare all the research in this chosen area.&lt;br /&gt;
* It might be difficult to really show weaknesses and gaps in the area as you tend to need to be an expert in the area to find these gaps. But, you can criticize the overall body of work and say what&#039;s missing and give general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to discuss the area as a whole in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast the papers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3:&#039;&#039;&#039;&lt;br /&gt;
Once you have the categories, this is where the paragraphs you&#039;ve written in Step 1 come in handy. You can write an introduction for each of your categories then just dump in the paragraphs from your document. Following this, you can polish your sections and better integrate them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Notes:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Have a good introduction, outlining exactly what you&#039;re going to cover, why it&#039;s important, etc.&lt;br /&gt;
* Make sure to keep things high-level--you shouldn&#039;t have to be an expert in the sub-topic you&#039;ve chosen to be able to read and understand your literature review.&lt;br /&gt;
* Make sure to be selective with what you&#039;re including. Include all of the really current things and what is historically relevant for your topic (i.e., either very current work or very important, groundbreaking work for the topic). Intermediary work doesn&#039;t always need to be included. Don&#039;t include citations just to get your citation count up; this actually detracts from your literature review.&lt;br /&gt;
&lt;br /&gt;
== Final Notes ==&lt;br /&gt;
&lt;br /&gt;
* Focus on having a good introduction.&lt;br /&gt;
* Set up your categories properly. &lt;br /&gt;
* Have a good introduction for each category. &lt;br /&gt;
* Give a good overview of each paper in the category. &lt;br /&gt;
* If possible, include a table to summarize everything from a section.&lt;br /&gt;
&lt;br /&gt;
Really avoid concluding that the existing work is great but that it would be even better if some particular thing existed--only to find out that it actually does exist. Before you reach a conclusion like this, make sure the thing &#039;&#039;&#039;doesn&#039;t&#039;&#039;&#039; already exist; otherwise your work doesn&#039;t look thorough.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18814</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18814"/>
		<updated>2014-03-13T14:33:35Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* What is a Literature Review? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in that area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A standalone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* An overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* An evaluation and comparison of all the research in that area.&lt;br /&gt;
* A critique of the overall body of work. It can be difficult to really show weaknesses and gaps, since you tend to need to be an expert in the area to find them, but you can still say what&#039;s missing and suggest general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to discuss the area as a whole in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast the papers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3:&#039;&#039;&#039;&lt;br /&gt;
Once you have the categories, this is where the paragraphs you&#039;ve written in Step 1 come in handy. You can write an introduction for each of your categories then just dump in the paragraphs from your document. Following this, you can polish your sections and better integrate them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Notes:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Have a good introduction, outlining exactly what you&#039;re going to cover, why it&#039;s important, etc.&lt;br /&gt;
* Make sure to keep things high-level--you shouldn&#039;t have to be an expert in the sub-topic you&#039;ve chosen to be able to read and understand your literature review.&lt;br /&gt;
* Make sure to be selective with what you&#039;re including. Include all of the really current things and what is historically relevant for your topic (i.e., either very current work or very important, groundbreaking work for the topic). Intermediary work doesn&#039;t always need to be included. Don&#039;t include citations just to get your citation count up; this actually detracts from your literature review.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18813</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18813"/>
		<updated>2014-03-13T14:33:12Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Tips Writing the Literature Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in that area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A standalone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* An overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* An evaluation and comparison of all the research in that area.&lt;br /&gt;
* A critique of the overall body of work. It can be difficult to really show weaknesses and gaps, since you tend to need to be an expert in the area to find them, but you can still say what&#039;s missing and suggest general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to discuss the area as a whole in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast the papers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3:&#039;&#039;&#039;&lt;br /&gt;
Once you have the categories, this is where the paragraphs you&#039;ve written in Step 1 come in handy. You can write an introduction for each of your categories then just dump in the paragraphs from your document. Following this, you can polish your sections and better integrate them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Notes:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Have a good introduction, outlining exactly what you&#039;re going to cover, why it&#039;s important, etc.&lt;br /&gt;
* Make sure to keep things high-level--you shouldn&#039;t have to be an expert in the sub-topic you&#039;ve chosen to be able to read and understand your literature review.&lt;br /&gt;
* Make sure to be selective with what you&#039;re including. Include all of the really current things and what is historically relevant for your topic (i.e., either very current work or very important, groundbreaking work for the topic). Intermediary work doesn&#039;t always need to be included. Don&#039;t include citations just to get your citation count up; this actually detracts from your literature review.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18812</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18812"/>
		<updated>2014-03-13T14:30:03Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Structure of a Research Paper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in that area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A standalone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* An overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* An evaluation and comparison of all the research in that area.&lt;br /&gt;
* A critique of the overall body of work. It can be difficult to really show weaknesses and gaps, since you tend to need to be an expert in the area to find them, but you can still say what&#039;s missing and suggest general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to discuss the area as a whole in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast the papers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3:&#039;&#039;&#039;&lt;br /&gt;
Once you have the categories, this is where the paragraphs you&#039;ve written in Step 1 come in handy. You can write an introduction for each of your categories then just dump in the paragraphs from your document. Following this, you can polish your sections and better integrate them.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18811</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18811"/>
		<updated>2014-03-13T14:29:20Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Tips Writing the Literature Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in that area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A standalone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* An overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* An evaluation and comparison of all the research in that area.&lt;br /&gt;
* A critique of the overall body of work. It can be difficult to really show weaknesses and gaps, since you tend to need to be an expert in the area to find them, but you can still say what&#039;s missing and suggest general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to discuss the area as a whole in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast the papers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 3:&#039;&#039;&#039;&lt;br /&gt;
Once you have the categories, this is where the paragraphs you&#039;ve written in Step 1 come in handy. You can write an introduction for each of your categories then just dump in the paragraphs from your document. Following this, you can polish your sections and better integrate them.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18810</id>
		<title>DistOS 2014W Lecture 17</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_17&amp;diff=18810"/>
		<updated>2014-03-13T14:27:46Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: Created page with &amp;quot;== What is a Literature Review? == We shouldn&amp;#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of the all the work in a chosen area. We shoul...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is a Literature Review? ==&lt;br /&gt;
We shouldn&#039;t summarize everything. Instead, we should be doing a critical analysis/comparison of all the work in a chosen area. We should cover all of the significant points in that area. &lt;br /&gt;
&lt;br /&gt;
Tips for a literature review with Anil:&lt;br /&gt;
* Try to organize the papers thematically rather than presenting them all linearly (i.e., &amp;quot;This person did this, this person did this, etc.&amp;quot;)&lt;br /&gt;
* Categorize papers into several themes/categories that relate to your topic and present the papers as part of a theme. This makes it easier to compare/contrast similar papers and to see how all the different chunks of knowledge/work relate.&lt;br /&gt;
&lt;br /&gt;
== Point of lit review ==&lt;br /&gt;
&lt;br /&gt;
With a research proposal, you would be asking, &amp;quot;What can be done to plug the holes here?&amp;quot; The point of a literature review, in this sense, is to offer new interpretations, theoretical approaches, or other ideas. &lt;br /&gt;
&lt;br /&gt;
A more traditional literature review consists of providing a critical overview of the current state of research efforts.&lt;br /&gt;
&lt;br /&gt;
A standalone literature review of articles best fits what we&#039;re doing in this class. This typically involves:&lt;br /&gt;
* An overview and analysis of the current state of the art in your chosen topic.&lt;br /&gt;
* An evaluation and comparison of all the research in that area.&lt;br /&gt;
* A critique of the overall body of work. It can be difficult to really show weaknesses and gaps, since you tend to need to be an expert in the area to find them, but you can still say what&#039;s missing and suggest general areas of future work.&lt;br /&gt;
&lt;br /&gt;
== Structure of a Research Paper ==&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
* Literature Review (about 1/4 to 1/3 of your thesis, typically, so if you can do the literature review for your thesis in a class, that&#039;s a great bonus)&lt;br /&gt;
* Methods&lt;br /&gt;
* Results&lt;br /&gt;
* Discussion&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
== Finding Sources ==&lt;br /&gt;
&lt;br /&gt;
A good way to go is to first look at tertiary sources (e.g., Wikipedia articles) to get a general idea of the area and what it&#039;s about. Following this, you can go into secondary sources to determine the history and general themes of the area. When you have a better understanding of the area overall, jump into your primary sources and get into the &amp;quot;nitty gritty&amp;quot; of the most current research.&lt;br /&gt;
&lt;br /&gt;
A good way to find articles and papers to cite is &amp;quot;footnote chasing&amp;quot;. In one of the papers you want to include, look at their related work and footnotes. These tend to be good citations that you should look up and include in your work--it helps you find out about the area as a whole, giving you a good base to start on. &lt;br /&gt;
&lt;br /&gt;
== Tips Writing the Literature Review ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 1:&#039;&#039;&#039;&lt;br /&gt;
Have one document that has a list of all your sources. With each source in the document, have a short paragraph/blurb of what the paper is about, how it works, what they did well/didn&#039;t do well, etc. This will show that you understood the papers and will save you having to read them all again later.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step 2:&#039;&#039;&#039;&lt;br /&gt;
Once again, thematic organization. Start to classify your papers into a few categories relating to your topic. This makes it easier to discuss the area as a whole in the introduction, abstract, etc. It also allows you to better organize the paper later and to compare/contrast the papers.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18809</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18809"/>
		<updated>2014-03-13T02:52:06Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Embarrassingly Parallell */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the ones we have covered so far, so we should be prepared to allot more time to studying them and come to class prepared. We may abandon the group-discussion format; instead, everyone would ask questions about what they did not understand from the paper, which would allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss the two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.&lt;br /&gt;
&lt;br /&gt;
After looking at the material from today, we will also be looking at how we can get the kind of distribution that we get with public resource computing, but with greater flexibility.&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in an acceptable state; those were graded 10/10. The professor has emailed everyone with feedback on their project proposal so that we can incorporate the comments and resubmit by the coming Saturday (the extended deadline). The deadline has been extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on the Tuesday, as they are ahead and better prepared for it; there should be six presentations on Tuesday and the rest on Thursday.&lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI and BOINC. BOINC is the system SETI is built upon; other projects, such as Folding@home, run on the same system. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How are they different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting, as it differs from some of the systems we have looked at, which deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home are examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines, running the program to process work units that are sent to their clients in return for credits. In the past, it has been institutes, such as universities, running services with other people connecting in to use said service. Public resource computing turns this use case on its head, with the institute (e.g., the university) being the one using the service while other people contribute to the service voluntarily. In the file systems we have covered so far, people would want access to the files stored in a network system; here, a system wants to access people&#039;s machines to utilize their processing power. Since users are contributing voluntarily, how do you make them care about the system if something were to happen? The gamification of the system causes many users to become invested in it. People are doing work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces results for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against problems such as malicious clients or faulty processors, redundant computing is done: work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in either of the following two cases:&lt;br /&gt;
# They receive the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit, in which case they take the answer that the majority gave.&lt;br /&gt;
# They have transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and have not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
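The retirement rules above amount to majority voting over redundant results. A minimal Python sketch (a hypothetical illustration, not BOINC&#039;s actual validator; the function name and the strict-majority rule are assumptions):&lt;br /&gt;

```python
from collections import Counter

def validate_work_unit(results, n):
    """Decide whether the results returned so far for one work unit
    are enough (n of them) to accept a majority answer."""
    if len(results) >= n:
        answer, votes = Counter(results).most_common(1)[0]
        if votes > len(results) // 2:  # strict majority wins
            return answer
    return None  # keep waiting, or retransmit (up to m times)
```

With n = 3, two matching answers out of three are accepted; if no strict majority emerges, the work unit stays outstanding and may be retransmitted.&lt;br /&gt;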
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you are voluntarily giving up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources that you want, as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them), whereas in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people would want access to the files stored in a network system; here, a system wants to access people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems it was about many clients sharing the data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, the master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, but people install these clients with consent, and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate at the peer-to-peer level, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
# Skype was modelled much like a public resource computing network (before Microsoft took over). The whole model of Skype was that the infrastructure just ran on the computers of those who had downloaded the client (like a consensual botnet). Once a person downloaded the client, they would be a part of this system. As with public resource computing, you would donate some of your resources in order to support the distributed infrastructure. It was also not assumed that everyone was reliable, but rather that some people are reliable some of the time. The network would choose super nodes to act as routers; these super nodes would be the machines with higher reliability and better processing power. After Microsoft&#039;s takeover, the super nodes were centralized and the super-node election functionality was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource and distribute work to clients, who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? Not necessarily--there could be untrustworthy clients sending back rubbish answers.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing. Work units are assigned to multiple clients, and the results that are returned to the server can be analyzed to find outliers in order to detect malicious users--but that only addresses fault tolerance from the client perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down. When it does, it uses exponential back-off to push back the clients and ask them to wait before sending their results again. But whenever the server comes back up, many clients may try to access it at once and may crash it once again--essentially, the server will have manufactured its own DDoS attack due to its own inadequacies. The exponential back-off approach is similar to the one adopted to resolve TCP congestion. It can be noted that there is almost no reliability engineering here, though: these are just standard servers running with one backup that is manually failed over to. This gives an idea of how asymmetric the relationship is. &lt;br /&gt;
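The client-side retry behaviour described above can be sketched as follows (a hypothetical illustration; the function name and parameters are assumptions, not the actual SETI@home protocol):&lt;br /&gt;

```python
import random
import time

def send_result_with_backoff(send, result, max_retries=8, base_delay=1.0):
    """Retry sending a result to the central server, doubling the wait
    after each failure. Random jitter keeps returning clients from
    hammering the server in lockstep when it comes back up."""
    delay = base_delay
    for attempt in range(max_retries):
        if send(result):  # hypothetical transport call; True on success
            return True
        time.sleep(delay * random.random())  # jittered exponential wait
        delay *= 2
    return False
```

Without the jitter, every client that backed off at the same moment would retry at the same moment, recreating the self-inflicted denial of service described above.&lt;br /&gt;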
&lt;br /&gt;
One reason for this becomes clear when you look at the actual service and who is running it. Reliability matters most for a service when a large number of people use it and, hence, would be upset were the service to go down. In this case, it&#039;s the university using the service, and clients are helping out by providing resources. If the service goes down, it is the university&#039;s fault, and the university can deal with it on its own. It is interesting to compare this strategy to highly reliable systems like Ceph or OceanStore, which can recover the data in case a node crashes.&lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto this idea of public resource computing? In place of the OceanStore metadata cluster, there is a central server. In place of the data store, there are machines doing computation. What maps specifically onto this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap but bandwidth is expensive, which shows why work units are sent infrequently. Storage sits in between--sometimes data is pushed to the clients, and when this is done, the resemblance of public resource computing to OceanStore is stronger.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
When you are doing parallel computations, you have to do a mixture of computation and communication. You&#039;re doing the computation separately, but you always have to do some communication. But how much communication do you have to do for every unit of computation? In some cases there are many dependencies, meaning that a large amount of communication is required (e.g., weather system simulations).&lt;br /&gt;
&lt;br /&gt;
Embarrassingly parallel means that a given problem requires a minimum of communication between the pieces of work. This typically means that you have a bunch of data that you want to analyze, and it&#039;s all independent. Because of this, you can just split up and distribute the work for analysis. In an embarrassingly parallel problem, speedup is trivial: thanks to the minimal communication, the more processors you add, the faster the system runs. For problems that are not embarrassingly parallel, however, the system can actually slow down when more processors are added, as more communication is required. With distributed systems, you either need to accept communication costs or modify your abstractions to get closer to an embarrassingly parallel system. Since speedup is trivial when the problem is embarrassingly parallel, you don&#039;t get much praise for achieving it.&lt;br /&gt;
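To make the contrast concrete, here is a minimal sketch of an embarrassingly parallel job in Python (illustrative only; the per-chunk analysis is a stand-in, and the function names are assumptions):&lt;br /&gt;

```python
from multiprocessing import Pool

def analyze(chunk):
    # Stand-in for per-work-unit analysis (e.g., scanning one slice
    # of telescope data); each chunk is fully independent.
    return sum(x * x for x in chunk)

def run_parallel(data, workers=4, chunk_size=1000):
    """Split independent data into work units and farm them out.
    No communication is needed between chunks, so adding workers
    speeds the job up almost linearly."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(workers) as pool:
        return sum(pool.map(analyze, chunks))
```

The only communication here is handing out chunks and summing the per-chunk results; a weather simulation, by contrast, would need neighbouring chunks to exchange data at every step, so adding workers adds communication.&lt;br /&gt;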
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload. The inherent nature of the problem lends itself to being divided into work units and computed in parallel without any need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload in parallel.  &lt;br /&gt;
&lt;br /&gt;
One more example of an &amp;quot;embarrassingly parallel&amp;quot; workload in what we have covered so far could be web indexing in GFS. Any file system that we have discussed so far which doesn&#039;t trust its clients could be modelled to work as a public resource sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18808</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18808"/>
		<updated>2014-03-13T02:44:40Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Comparisons */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers that will be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the papers we have covered so far, so we should be prepared to allot more time to studying them and to come to class prepared. We may abandon the approach of discussing the papers in groups; instead, everyone would ask questions about what they did not understand from the paper, which would allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.&lt;br /&gt;
&lt;br /&gt;
After looking at the material from today, we will also be looking at how we can get the kind of distribution that we get with public resource computing, but with greater flexibility.&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state to be accepted and graded them 10/10. The professor has emailed everyone with feedback about their project proposal so that we can incorporate those comments and submit the project proposals by the coming Saturday (the extended deadline). The deadline has been extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared for it; there should be six presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI and BOINC. BOINC is the system SETI is built upon; there are other projects running on the same system, like Folding@home. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How are they different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting, as it differs from some of the systems we have looked at, which deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home are examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines, running the program to work on the work units that are sent to their clients in return for credits. In the past, it was institutes, such as universities, that ran services, with other people connecting in to use said service. Public resource computing turns this use case on its head: the institute (e.g., the university) is the one using the service, while other people contribute to said service voluntarily. In the file systems we have covered so far, people would want access to the files stored in a network system; here, a system wants to access people&#039;s machines to utilize their processing power. Since they are contributing voluntarily, how do you make these users care about the system if something were to happen? The gamification of the system causes many users to become invested in it. People are doing work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces a result for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against problems such as malicious clients or faulty processors, redundant computing is done: work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in either of the following two cases:&lt;br /&gt;
# They receive the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit, in which case they take the answer that the majority gave.&lt;br /&gt;
# They have transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and have not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you are voluntarily giving up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources that you want, as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them), whereas in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people would want access to the files stored in a network system; here, a system wants to access people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems it was about many clients sharing the data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, the master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, but people install these clients with consent, and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate at the peer-to-peer level, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
# Skype was modelled much like a public resource computing network (before Microsoft took over). The whole model of Skype was that the infrastructure just ran on the computers of those who had downloaded the client (like a consensual botnet). Once a person downloaded the client, they would be a part of this system. As with public resource computing, you would donate some of your resources in order to support the distributed infrastructure. It was also not assumed that everyone was reliable, but rather that some people are reliable some of the time. The network would choose super nodes to act as routers; these super nodes would be the machines with higher reliability and better processing power. After Microsoft&#039;s takeover, the super nodes were centralized and the super-node election functionality was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource and distribute work to clients, who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? Not necessarily--there could be untrustworthy clients sending back rubbish answers.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing. Work units are assigned to multiple clients, and the results that are returned to the server can be analyzed to find outliers in order to detect malicious users--but that only addresses fault tolerance from the client perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down. When it does, it uses exponential back-off to push back the clients and ask them to wait before sending their results again. But whenever the server comes back up, many clients may try to access it at once and may crash it once again--essentially, the server will have manufactured its own DDoS attack due to its own inadequacies. The exponential back-off approach is similar to the one adopted to resolve TCP congestion. It can be noted that there is almost no reliability engineering here, though: these are just standard servers running with one backup that is manually failed over to. This gives an idea of how asymmetric the relationship is. &lt;br /&gt;
&lt;br /&gt;
One reason for this becomes clear when you look at the actual service and who is running it. Reliability matters most for a service when a large number of people use it and, hence, would be upset were the service to go down. In this case, it&#039;s the university using the service, and clients are helping out by providing resources. If the service goes down, it is the university&#039;s fault, and the university can deal with it on its own. It is interesting to compare this strategy to highly reliable systems like Ceph or OceanStore, which can recover the data in case a node crashes.&lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto this idea of public resource computing? In place of the OceanStore metadata cluster, there is a central server. In place of the data store, there are machines doing computation. What maps specifically onto this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap but bandwidth is expensive, which shows why work units are sent infrequently. Storage sits in between--sometimes data is pushed to the clients, and when this is done, the resemblance of public resource computing to OceanStore is stronger.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the inherent nature of the problem lends itself to being divided into work units and computed in parallel without any need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload in parallel, and you don&#039;t get much praise for doing it. One more example of an &amp;quot;embarrassingly parallel&amp;quot; workload from the systems we have covered so far could be web indexing in GFS. Any file system that we have discussed so far which doesn&#039;t trust its clients could be modelled to work as a public resource sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18807</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18807"/>
		<updated>2014-03-13T02:42:44Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Trust Model and Fault Tolerance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers that will be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the papers we have covered so far, so we should be prepared to allot more time to studying them and to come to class prepared. We may abandon the approach of discussing the papers in groups; instead, everyone would ask questions about what they did not understand from the paper, which would allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.&lt;br /&gt;
&lt;br /&gt;
After looking at the material from today, we will also be looking at how we can get the kind of distribution that we get with public resource computing, but with greater flexibility.&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state to be accepted and graded them 10/10. The professor has emailed everyone with feedback about their project proposal so that we can incorporate those comments and submit the project proposals by the coming Saturday (the extended deadline). The deadline has been extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared for it; there should be six presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI and BOINC. BOINC is the system SETI is built upon; there are other projects running on the same system, like Folding@home. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How are they different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting, as it differs from some of the systems we have looked at, which deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home are examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines, running the program to work on the work units that are sent to their clients in return for credits. In the past, it was institutes, such as universities, that ran services, with other people connecting in to use said service. Public resource computing turns this use case on its head: the institute (e.g., the university) is the one using the service, while other people contribute to said service voluntarily. In the file systems we have covered so far, people would want access to the files stored in a network system; here, a system wants to access people&#039;s machines to utilize their processing power. Since they are contributing voluntarily, how do you make these users care about the system if something were to happen? The gamification of the system causes many users to become invested in it. People are doing work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces a result for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against problems such as malicious clients or faulty processors, redundant computing is done: work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in either of the following two cases:&lt;br /&gt;
# They receive the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit, in which case they take the answer that the majority gave.&lt;br /&gt;
# They have transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and have not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you are voluntarily giving up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources that you want, as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them), whereas in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people would want access to the files stored in a network system; here, a system wants to access people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems it was about many clients sharing the data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, the master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, but people install these clients with consent, and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate at the peer-to-peer level, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource and distribute work to clients, who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? Not necessarily--there could be untrustworthy clients sending back rubbish answers.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing. Work units are assigned to multiple clients, and the results that are returned to the server can be analyzed to find outliers in order to detect malicious users--but that only addresses fault tolerance from the client perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down. When it does, it uses exponential back-off to push back the clients and ask them to wait before sending their results again. But whenever the server comes back up, many clients may try to access it at once and may crash it once again--essentially, the server will have manufactured its own DDoS attack due to its own inadequacies. The exponential back-off approach is similar to the one adopted to resolve TCP congestion. It can be noted that there is almost no reliability engineering here, though: these are just standard servers running with one backup that is manually failed over to. This gives an idea of how asymmetric the relationship is. &lt;br /&gt;
&lt;br /&gt;
One reason for this becomes clear when you look at the actual service and who is running it. Reliability matters most for a service when a large number of people use it and, hence, would be upset were the service to go down. In this case, it&#039;s the university using the service, and clients are helping out by providing resources. If the service goes down, it is the university&#039;s fault, and the university can deal with it on its own. It is interesting to compare this strategy to highly reliable systems like Ceph or OceanStore, which can recover the data in case a node crashes.&lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto this idea of public resource computing? In place of the OceanStore metadata cluster, there is a central server. In place of the data store, there are machines doing computation. What maps onto this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap, but bandwidth is expensive, which is why work units are sent infrequently. Storage is in between--sometimes data is pushed to the clients. When this is done, the resemblance of public resource computing to OceanStore is stronger.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel without any need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload in parallel, and you don&#039;t get much praise for doing it. Another example of an &amp;quot;embarrassingly parallel&amp;quot; workload from the file systems we have covered so far could be web indexing in GFS. Any file system we have discussed so far that doesn&#039;t trust the clients can be modeled to work as a public sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18806</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18806"/>
		<updated>2014-03-13T02:40:50Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Outline for upcoming lectures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the papers we have covered so far, so we should be prepared to allot more time to studying them and come prepared to class. We may abandon the group discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables.&lt;br /&gt;
&lt;br /&gt;
After looking at the material from today, we will also be looking at how we can get the kind of distribution that we get with public resource computing, but with greater flexibility.&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state fit for acceptance and graded them 10/10. The professor has emailed everyone feedback on their project proposal so that we can incorporate the comments and resubmit by this coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared; there should be 6 presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergraduates will have their final exam on April 24th, which is also the deadline to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI and BOINC. BOINC is the platform SETI@home is built upon; other projects, such as Folding@home, run on it as well. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How is it similar in design, purpose, and technologies? How is it different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use, way of sharing resources. This is interesting as it differs from some of the systems we have looked at that deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home give examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install clients on their machines, running the program to work on work units that are sent to their clients in return for credits. In the past, it has been institutes, such as universities, running services with other people connecting in to use said service. Public resource computing turns this use case on its head, with the institute (e.g., the university) being the one using the service while other people contribute to said service voluntarily. In the file systems we have covered so far, people want access to the files stored in a network system; here, a system wants access to people&#039;s machines to utilize their processing power. Since they are contributing voluntarily, how do you make these users care about the system if something were to happen? The gamification of the system causes many users to become invested in it. People are doing work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces results for the work unit it was processing, it sends the results to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against, for example, malicious clients or faulty processors, redundant computing is done. Work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in one of the following two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit, in which case it takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
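&lt;br /&gt;
As a rough illustration, the two retirement cases above might be sketched as follows (a hypothetical sketch in Python; the function and parameter names are assumptions, not BOINC&#039;s actual interface):&lt;br /&gt;

```python
from collections import Counter

def canonical_result(results, n, m, transmissions):
    # Sketch of the redundant-computing rule described above
    # (hypothetical names; not the real BOINC scheduler API).
    if len(results) >= n:
        # Case 1: enough results are in; take the majority answer.
        answer, votes = Counter(results).most_common(1)[0]
        return ("accepted", answer)
    if transmissions >= m:
        # Case 2: sent m times without n responses; retire the unit
        # with no canonical result.
        return ("retired", None)
    # Otherwise keep waiting, or retransmit to another client.
    return ("pending", None)
```

Increasing &#039;&#039;&#039;m&#039;&#039;&#039; here lowers the chance that a work unit is retired unprocessed, at the cost of more redundant transmissions.&lt;br /&gt;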
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you voluntarily give up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems are utilizing client machines to perform or aid in some task. The answer: consent. You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources that you want, as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them). In public resource computing, by contrast, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people want access to the files stored in a network system; here, a system wants access to people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems it was about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority; in BOINC, the master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible, making it more like GFS in that sense, because GFS also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, but people install these clients with consent and there is no need for communication between the clients (it is not a peer-to-peer network). Clients could be made to communicate at the peer-to-peer level, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource and distribute work to clients, who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? The answer is not necessarily--there could be untrustworthy clients sending back rubbish answers.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing. Work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users--but that only addresses fault tolerance from the client perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down; when it does, it uses exponential back-off to push back the clients, asking them to wait before sending their results again. Otherwise, whenever the server comes back up, many clients might try to access it at once and crash it again--essentially, the server would have manufactured its own DDoS attack out of its own inadequacies. The exponential back-off approach is similar to the one used to resolve TCP congestion. It can be noted that there is almost no reliability engineering here, though. These are just standard servers running with one backup that is manually failed over to. This gives an idea of how asymmetric the relationship is. &lt;br /&gt;
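&lt;br /&gt;
As a rough sketch of the back-off idea just described (illustrative Python; the function name and constants are assumptions, not the actual SETI client code):&lt;br /&gt;

```python
import random

def retry_delay(attempt, base=2.0, cap=3600.0):
    # Exponential back-off with jitter, in the spirit of the SETI
    # client behaviour described above (illustrative constants, not
    # the real client parameters).
    delay = min(cap, base ** attempt)
    # Randomizing ("jitter") spreads retries out so that thousands of
    # clients do not hit the recovering server at the same instant.
    return random.uniform(0, delay)
```

Without the jitter term, all clients that failed at the same time would also retry at the same time, recreating exactly the self-inflicted flood described above.&lt;br /&gt;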
&lt;br /&gt;
One reason for this can be seen by looking at the actual service and who is running it. Reliability matters for a service when a large number of people use it and, hence, would be upset were the service to go down. In this case, it&#039;s the university using the service, and clients are helping out by providing resources. If the service goes down, it is the university&#039;s problem and it can deal with it on its own. &lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto this idea of public resource computing? In place of the OceanStore metadata cluster, there is a central server. In place of the data store, there are machines doing computation. What maps onto this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap, but bandwidth is expensive, which is why work units are sent infrequently. Storage is in between--sometimes data is pushed to the clients. When this is done, the resemblance of public resource computing to OceanStore is stronger.&lt;br /&gt;
&lt;br /&gt;
Public resource computing doesn&#039;t need to be highly reliable because it is used by scientists/researchers who can bring the system back up if it goes down and start again. There are, however, a few measures discussed within SETI, like read-only data backups. Compare this to highly reliable systems like Ceph or OceanStore, which can recover the data in case of node crashes. &lt;br /&gt;
     &lt;br /&gt;
Skype was modelled much like a public resource computing network (before Microsoft took over), as the network would choose supernodes to act as routers. These supernodes would be the machines with higher reliability and more processing power. After Microsoft&#039;s takeover, the supernodes were centralized and supernode election was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel without any need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload in parallel, and you don&#039;t get much praise for doing it. Another example of an &amp;quot;embarrassingly parallel&amp;quot; workload from the file systems we have covered so far could be web indexing in GFS. Any file system we have discussed so far that doesn&#039;t trust the clients can be modeled to work as a public sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18805</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18805"/>
		<updated>2014-03-13T02:39:53Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Trust Model and Fault Tolerance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the papers we have covered so far, so we should be prepared to allot more time to studying them and come prepared to class. We may abandon the group discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state fit for acceptance and graded them 10/10. The professor has emailed everyone feedback on their project proposal so that we can incorporate the comments and resubmit by this coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared; there should be 6 presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergraduates will have their final exam on April 24th, which is also the deadline to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI and BOINC. BOINC is the platform SETI@home is built upon; other projects, such as Folding@home, run on it as well. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How is it similar in design, purpose, and technologies? How is it different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use, way of sharing resources. This is interesting as it differs from some of the systems we have looked at that deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home give examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install clients on their machines, running the program to work on work units that are sent to their clients in return for credits. In the past, it has been institutes, such as universities, running services with other people connecting in to use said service. Public resource computing turns this use case on its head, with the institute (e.g., the university) being the one using the service while other people contribute to said service voluntarily. In the file systems we have covered so far, people want access to the files stored in a network system; here, a system wants access to people&#039;s machines to utilize their processing power. Since they are contributing voluntarily, how do you make these users care about the system if something were to happen? The gamification of the system causes many users to become invested in it. People are doing work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces results for the work unit it was processing, it sends the results to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against, for example, malicious clients or faulty processors, redundant computing is done. Work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in one of the following two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit, in which case it takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
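&lt;br /&gt;
As a rough illustration, the two retirement cases above might be sketched as follows (a hypothetical sketch in Python; the function and parameter names are assumptions, not BOINC&#039;s actual interface):&lt;br /&gt;

```python
from collections import Counter

def canonical_result(results, n, m, transmissions):
    # Sketch of the redundant-computing rule described above
    # (hypothetical names; not the real BOINC scheduler API).
    if len(results) >= n:
        # Case 1: enough results are in; take the majority answer.
        answer, votes = Counter(results).most_common(1)[0]
        return ("accepted", answer)
    if transmissions >= m:
        # Case 2: sent m times without n responses; retire the unit
        # with no canonical result.
        return ("retired", None)
    # Otherwise keep waiting, or retransmit to another client.
    return ("pending", None)
```

Increasing &#039;&#039;&#039;m&#039;&#039;&#039; here lowers the chance that a work unit is retired unprocessed, at the cost of more redundant transmissions.&lt;br /&gt;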
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you voluntarily give up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing like this, we can contrast it with a botnet. What is the difference? Both systems are utilizing client machines to perform or aid in some task. The answer: consent. You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources that you want, as well as reliability. With a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them). In public resource computing, by contrast, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people want access to the files stored in a network system; here, a system wants access to people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems it was about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority; in BOINC, the master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible, making it more like GFS in that sense, because GFS also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, but people install these clients with consent and there is no need for communication between the clients (it is not a peer-to-peer network). Clients could be made to communicate at the peer-to-peer level, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
In this central model, you have a central resource and distribute work to clients, who process the work and send back results. Once they do, you can send them more work. In this model, can you trust the client to complete the computation successfully? The answer is not necessarily--there could be untrustworthy clients sending back rubbish answers.&lt;br /&gt;
&lt;br /&gt;
So, how does SETI address the question of fault tolerance? It uses replication for reliability and redundant computing. Work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users--but that only addresses fault tolerance from the client perspective. &lt;br /&gt;
&lt;br /&gt;
However, SETI has a centralized server, which can go down; when it does, it uses exponential back-off to push back the clients, asking them to wait before sending their results again. Otherwise, whenever the server comes back up, many clients might try to access it at once and crash it again--essentially, the server would have manufactured its own DDoS attack out of its own inadequacies. The exponential back-off approach is similar to the one used to resolve TCP congestion. It can be noted that there is almost no reliability engineering here, though. These are just standard servers running with one backup that is manually failed over to. This gives an idea of how asymmetric the relationship is. &lt;br /&gt;
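&lt;br /&gt;
As a rough sketch of the back-off idea just described (illustrative Python; the function name and constants are assumptions, not the actual SETI client code):&lt;br /&gt;

```python
import random

def retry_delay(attempt, base=2.0, cap=3600.0):
    # Exponential back-off with jitter, in the spirit of the SETI
    # client behaviour described above (illustrative constants, not
    # the real client parameters).
    delay = min(cap, base ** attempt)
    # Randomizing ("jitter") spreads retries out so that thousands of
    # clients do not hit the recovering server at the same instant.
    return random.uniform(0, delay)
```

Without the jitter term, all clients that failed at the same time would also retry at the same time, recreating exactly the self-inflicted flood described above.&lt;br /&gt;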
&lt;br /&gt;
One reason for this can be seen by looking at the actual service and who is running it. Reliability matters for a service when a large number of people use it and, hence, would be upset were the service to go down. In this case, it&#039;s the university using the service, and clients are helping out by providing resources. If the service goes down, it is the university&#039;s problem and it can deal with it on its own. &lt;br /&gt;
&lt;br /&gt;
The idea of redundancy relates to OceanStore a little, but how would OceanStore map onto this idea of public resource computing? In place of the OceanStore metadata cluster, there is a central server. In place of the data store, there are machines doing computation. What maps onto this model of public resource computing is the notion of having one central thing and a bunch of outlying nodes. This is very much a master/slave relationship, though a voluntary one. In this relationship, CPU cycles are cheap, but bandwidth is expensive, which is why work units are sent infrequently. Storage is in between--sometimes data is pushed to the clients. When this is done, the resemblance of public resource computing to OceanStore is stronger.&lt;br /&gt;
&lt;br /&gt;
Public resource computing doesn&#039;t need to be highly reliable because it is used by scientists/researchers who can bring the system back up if it goes down and start again. There are, however, a few measures discussed within SETI, like read-only data backups. Compare this to highly reliable systems like Ceph or OceanStore, which can recover the data in case of node crashes. &lt;br /&gt;
     &lt;br /&gt;
Skype was modelled much like a public resource computing network (before Microsoft took over), as the network would choose supernodes to act as routers. These supernodes would be the machines with higher reliability and more processing power. After Microsoft&#039;s takeover, the supernodes were centralized and supernode election was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel without any need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload in parallel, and you don&#039;t get much praise for doing it. Another example of an &amp;quot;embarrassingly parallel&amp;quot; workload from the file systems we have covered so far could be web indexing in GFS. Any file system we have discussed so far that doesn&#039;t trust the clients can be modeled to work as a public sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18804</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18804"/>
		<updated>2014-03-12T19:37:30Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Trust Model and Fault Tolerance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the papers we have covered so far, so we should be prepared to allot more time to studying them and come prepared to class. We may abandon the group discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state fit for acceptance and graded them 10/10. The professor has emailed everyone feedback on their project proposal so that we can incorporate the comments and resubmit by this coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared; there should be 6 presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergraduates will have their final exam on April 24th, which is also the deadline to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI and BOINC. BOINC is the platform SETI@home is built upon; other projects, such as Folding@home, run on it as well. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How is it similar in design, purpose, and technologies? How is it different?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use, way of sharing resources. This is interesting as it differs from some of the systems we have looked at that deal with the sharing of information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home give examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install clients on their machines, running the program to work on work units that are sent to their clients in return for credits. In the past, it has been institutes, such as universities, running services with other people connecting in to use said service. Public resource computing turns this use case on its head, with the institute (e.g., the university) being the one using the service while other people contribute to said service voluntarily. In the file systems we have covered so far, people want access to the files stored in a network system; here, a system wants access to people&#039;s machines to utilize their processing power. Since they are contributing voluntarily, how do you make these users care about the system if something were to happen? The gamification of the system causes many users to become invested in it. People are doing work for credits, and those with the most credits are showcased as major contributors. They can also see the amount of resources (e.g., processor cycles) they have devoted to the cause in the GUI of the installed client. When the client produces results for the work unit it was processing, it sends the results to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against, for example, malicious clients or faulty processors, redundant computing is done. Work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in one of the following two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a certain work unit, in which case it takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected responses. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
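The retirement rule above can be sketched in a few lines (a hypothetical illustration; the function and parameter names are ours, not BOINC&#039;s actual API):&lt;br /&gt;

```python
# Hypothetical sketch of BOINC-style redundant validation (names are
# illustrative, not the real BOINC API). A work unit stays out until
# either n results arrive (majority vote decides the answer) or it has
# been transmitted m times, in which case it is retired unresolved.
from collections import Counter

def validate_work_unit(results, n, m, transmissions):
    """Return (status, answer) for one work unit."""
    if len(results) == n:
        # The answer the majority of clients agreed on wins.
        counts = Counter(results)
        answer, _ = counts.most_common(1)[0]
        return ("resolved", answer)
    if transmissions == m:
        # Sent m times without n responses: give up on this unit.
        return ("retired", None)
    return ("pending", None)

print(validate_work_unit(["42", "42", "17"], n=3, m=6, transmissions=3))
```

Increasing &#039;&#039;&#039;m&#039;&#039;&#039; gives each work unit more chances to reach &#039;&#039;&#039;n&#039;&#039;&#039; results before it is retired, which is exactly the trade-off noted above.&lt;br /&gt;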
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource: you voluntarily give up your spare compute cycles for projects (this is a little like donating blood; public resource computing is a vampire). Looking at public resource computing this way, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You consensually contribute to a project rather than being (unknowingly) forced to. Other differences are the ends and resources involved, as well as reliability. With a botnet, you can trust that a higher proportion of your machines will follow your commands exactly (as their owners have no idea they are executing them); in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems, it was about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, a master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, except that people install the clients with consent and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate peer to peer, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
Reliability: how does SETI@home address fault tolerance? It uses replication for reliability: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That, however, only addresses fault tolerance from the client&#039;s side. SETI@home also has a centralized server, which can go down; when it does, clients use an exponential backoff mechanism, waiting progressively longer before trying to send their results again. Even so, when the server comes back up, many clients may try to access it at once and crash it again, a denial of service manufactured by the server&#039;s own inadequacies. This exponential backoff approach is similar to the one used to resolve TCP congestion.  &lt;br /&gt;
Public resource computing systems do not need to be highly reliable, because they are used by scientists and researchers who can bring a system back up and start it again if it goes down. There are, however, a few measures discussed for SETI@home, such as read-only data backups. Compare this to highly reliable systems like Ceph or OceanStore, which can recover data when nodes crash. &lt;br /&gt;
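The client-side backoff described above can be sketched as follows (a minimal illustration with made-up constants, not SETI@home&#039;s actual values):&lt;br /&gt;

```python
# Hypothetical sketch of client-side exponential backoff. After each
# failed upload attempt the client waits twice as long, up to a cap,
# which spreads out the retry storm after a server outage.
def backoff_delay(attempt, base=2.0, cap=3600.0):
    """Seconds to wait before retry number `attempt` (0-indexed)."""
    return min(cap, base * (2 ** attempt))

# Delays double each attempt until the cap is reached.
print([backoff_delay(a) for a in range(5)])
```

Note that capping the delay (and, in practice, adding random jitter) matters: without it, clients that failed at the same moment would all retry at the same moment, recreating the thundering-herd problem described above.&lt;br /&gt;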
     &lt;br /&gt;
Skype was modelled much like a public resource computing network (before the Microsoft takeover), as the network would choose supernodes to act as routers. These supernodes would be the machines with higher reliability and more processing power. After Microsoft&#039;s takeover, the supernodes were centralized and the supernode-election functionality was removed from the system.&lt;br /&gt;
&lt;br /&gt;
=== Embarrassingly Parallel ===&lt;br /&gt;
&lt;br /&gt;
SETI@home is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel, with no communication needed between the units. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload, and you do not get much praise for doing it. Another example of an embarrassingly parallel workload from the file systems we have covered so far is web indexing on GFS. Any file system we have discussed that does not trust its clients could be modelled as a public sharing system.&lt;br /&gt;
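As a minimal sketch of an embarrassingly parallel workload (illustrative names only; a local thread pool stands in for remote volunteer clients):&lt;br /&gt;

```python
# Minimal illustration of an embarrassingly parallel workload: the
# input splits into independent work units, each unit is processed with
# no communication between workers, and the server simply collects the
# per-unit outputs afterwards.
from multiprocessing.dummy import Pool  # thread pool stands in for remote clients

def split_into_work_units(data, unit_size):
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def process_unit(unit):
    # Stand-in analysis step (e.g., scanning one chunk of radio data).
    return sum(unit)

units = split_into_work_units(list(range(10)), unit_size=3)
with Pool(4) as pool:
    results = pool.map(process_unit, units)  # each unit is independent
print(results)
```

Because `process_unit` never touches another unit&#039;s data, the units can be handed to any number of untrusted, unreliable machines in any order, which is exactly what makes the public resource computing model workable.&lt;br /&gt;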
&lt;br /&gt;
Note: public resource computing is also very similar to MapReduce, which we will discuss later in the course. Keep public resource computing in mind when we get there.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18803</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18803"/>
		<updated>2014-03-12T19:37:02Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Comparisons */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the ones we have covered so far, so we should be prepared to allot more time to studying them and come to class prepared. We may abandon the group-discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state to be accepted and graded them 10/10. The professor has emailed everyone feedback on their project proposal so that we can incorporate those comments and submit the proposals by the coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared for it; there should be 6 presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI@home and BOINC. BOINC is the system SETI@home is built upon; other projects, such as Folding@home, run on the same system. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How do they differ?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting, as it differs from some of the systems we have looked at, which deal with sharing information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home tackle examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines and run it to process the work units sent to them, in return for credits. In the past, it was institutions, such as universities, that ran services, with other people connecting in to use them. Public resource computing turns this use case on its head: the institution (e.g., the university) is the one using the service, while other people contribute to it voluntarily. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power. Since users contribute voluntarily, how do you make them care about the system? Gamification causes many users to become invested: people do work for credits, and those with the most credits are showcased as major contributors. Users can also see, in the GUI of the installed client, the amount of resources (e.g., processor cycles) they have devoted to the cause. When the client produces a result for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against, for example, malicious clients or faulty processors, redundant computing is used: work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in one of two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a work unit; it then takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected results. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource: you voluntarily give up your spare compute cycles for projects (this is a little like donating blood; public resource computing is a vampire). Looking at public resource computing this way, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You consensually contribute to a project rather than being (unknowingly) forced to. Other differences are the ends and resources involved, as well as reliability. With a botnet, you can trust that a higher proportion of your machines will follow your commands exactly (as their owners have no idea they are executing them); in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
# Use cases have been turned on their head. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power.&lt;br /&gt;
# In other file systems, it was about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
# It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, a master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
# Public resource systems are like botnets, except that people install the clients with consent and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate peer to peer, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
Reliability: how does SETI@home address fault tolerance? It uses replication for reliability: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That, however, only addresses fault tolerance from the client&#039;s side. SETI@home also has a centralized server, which can go down; when it does, clients use an exponential backoff mechanism, waiting progressively longer before trying to send their results again. Even so, when the server comes back up, many clients may try to access it at once and crash it again, a denial of service manufactured by the server&#039;s own inadequacies. This exponential backoff approach is similar to the one used to resolve TCP congestion.  &lt;br /&gt;
Public resource computing systems do not need to be highly reliable, because they are used by scientists and researchers who can bring a system back up and start it again if it goes down. There are, however, a few measures discussed for SETI@home, such as read-only data backups. Compare this to highly reliable systems like Ceph or OceanStore, which can recover data when nodes crash. &lt;br /&gt;
     &lt;br /&gt;
Skype was modelled much like a public resource computing network (before the Microsoft takeover), as the network would choose supernodes to act as routers. These supernodes would be the machines with higher reliability and more processing power. After Microsoft&#039;s takeover, the supernodes were centralized and the supernode-election functionality was removed from the system.&lt;br /&gt;
&lt;br /&gt;
SETI@home is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel, with no communication needed between the units. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload, and you do not get much praise for doing it. Another example of an embarrassingly parallel workload from the file systems we have covered so far is web indexing on GFS. Any file system we have discussed that does not trust its clients could be modelled as a public sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: public resource computing is also very similar to MapReduce, which we will discuss later in the course. Keep public resource computing in mind when we get there.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18802</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18802"/>
		<updated>2014-03-12T19:36:41Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Public Resource Computing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the ones we have covered so far, so we should be prepared to allot more time to studying them and come to class prepared. We may abandon the group-discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state to be accepted and graded them 10/10. The professor has emailed everyone feedback on their project proposal so that we can incorporate those comments and submit the proposals by the coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared for it; there should be 6 presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI@home and BOINC. BOINC is the system SETI@home is built upon; other projects, such as Folding@home, run on the same system. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How do they differ?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting, as it differs from some of the systems we have looked at, which deal with sharing information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home tackle examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines and run it to process the work units sent to them, in return for credits. In the past, it was institutions, such as universities, that ran services, with other people connecting in to use them. Public resource computing turns this use case on its head: the institution (e.g., the university) is the one using the service, while other people contribute to it voluntarily. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power. Since users contribute voluntarily, how do you make them care about the system? Gamification causes many users to become invested: people do work for credits, and those with the most credits are showcased as major contributors. Users can also see, in the GUI of the installed client, the amount of resources (e.g., processor cycles) they have devoted to the cause. When the client produces a result for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
&lt;br /&gt;
For fault tolerance against, for example, malicious clients or faulty processors, redundant computing is used: work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in one of two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a work unit; it then takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected results. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
&lt;br /&gt;
=== General Discussion ===&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource: you voluntarily give up your spare compute cycles for projects (this is a little like donating blood; public resource computing is a vampire). Looking at public resource computing this way, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You consensually contribute to a project rather than being (unknowingly) forced to. Other differences are the ends and resources involved, as well as reliability. With a botnet, you can trust that a higher proportion of your machines will follow your commands exactly (as their owners have no idea they are executing them); in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
=== Comparisons ===&lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
1) Use cases have been turned on their head. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power.&lt;br /&gt;
2) In other file systems, it was about many clients sharing data; here it is more about sharing processing power. In Folding@home, the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
3) It is nothing like systems such as OceanStore, where there is no centralized authority. In BOINC, a master/slave relation between the centralized server and the clients installed across users&#039; machines is still visible; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
4) Public resource systems are like botnets, except that people install the clients with consent and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate peer to peer, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
=== Trust Model and Fault Tolerance ===&lt;br /&gt;
&lt;br /&gt;
Reliability: how does SETI@home address fault tolerance? It uses replication for reliability: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That, however, only addresses fault tolerance from the client&#039;s side. SETI@home also has a centralized server, which can go down; when it does, clients use an exponential backoff mechanism, waiting progressively longer before trying to send their results again. Even so, when the server comes back up, many clients may try to access it at once and crash it again, a denial of service manufactured by the server&#039;s own inadequacies. This exponential backoff approach is similar to the one used to resolve TCP congestion.  &lt;br /&gt;
Public resource computing systems do not need to be highly reliable, because they are used by scientists and researchers who can bring a system back up and start it again if it goes down. There are, however, a few measures discussed for SETI@home, such as read-only data backups. Compare this to highly reliable systems like Ceph or OceanStore, which can recover data when nodes crash. &lt;br /&gt;
     &lt;br /&gt;
Skype was modelled much like a public resource computing network (before the Microsoft takeover), as the network would choose supernodes to act as routers. These supernodes would be the machines with higher reliability and more processing power. After Microsoft&#039;s takeover, the supernodes were centralized and the supernode-election functionality was removed from the system.&lt;br /&gt;
&lt;br /&gt;
SETI@home is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel, with no communication needed between the units. It is called &amp;quot;embarrassingly parallel&amp;quot; because little to no effort is required to distribute the workload, and you do not get much praise for doing it. Another example of an embarrassingly parallel workload from the file systems we have covered so far is web indexing on GFS. Any file system we have discussed that does not trust its clients could be modelled as a public sharing system.&lt;br /&gt;
&lt;br /&gt;
Note: public resource computing is also very similar to MapReduce, which we will discuss later in the course. Keep public resource computing in mind when we get there.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18801</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18801"/>
		<updated>2014-03-12T19:34:46Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outline for upcoming lectures ==&lt;br /&gt;
&lt;br /&gt;
All the papers to be covered in upcoming lectures have been posted on the wiki. These papers will be more difficult than the ones we have covered so far, so we should be prepared to allot more time to studying them and come to class prepared. We may abandon the group-discussion format; instead, everyone will ask questions about what they did not understand from the paper, which should allow us to discuss the technical details better.&lt;br /&gt;
The professor will not be teaching the next class; instead, our TA will discuss two papers on how to conduct a literature survey, which should help with our projects. &lt;br /&gt;
The rest of the papers will deal with many closely related systems. In particular, we will be looking at distributed hash tables and systems that use distributed hash tables. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project proposal==&lt;br /&gt;
There were 11 proposals, of which the professor found 4 to be in a state to be accepted and graded them 10/10. The professor has emailed everyone feedback on their project proposal so that we can incorporate those comments and submit the proposals by the coming Saturday (the extended deadline). The deadline was extended so that everyone can work out the flaws in their proposal and get the best grade (10/10).&lt;br /&gt;
Project presentations are to be held on April 1st and 3rd. People who got 10/10 should be ready to present on Tuesday, as they are ahead and better prepared for it; there should be 6 presentations on Tuesday and the rest on Thursday.   &lt;br /&gt;
Undergrads will have their final exam on April 24th, which is also the date to turn in the final project report.&lt;br /&gt;
&lt;br /&gt;
== Public Resource Computing  ==&lt;br /&gt;
&lt;br /&gt;
The papers assigned for reading were on SETI@home and BOINC. BOINC is the system SETI@home is built upon; other projects, such as Folding@home, run on the same system. In particular, we want to discuss the following:&lt;br /&gt;
What is public resource computing? How does public resource computing relate to the various computational models and systems that we have seen this semester? How are they similar in design, purpose, and technologies? How do they differ?&lt;br /&gt;
 &lt;br /&gt;
The main purpose of public resource computing was to have a universally accessible, easy-to-use way of sharing resources. This is interesting, as it differs from some of the systems we have looked at, which deal with sharing information rather than resources. &lt;br /&gt;
&lt;br /&gt;
For computational parallelism, you need a highly parallel problem; SETI@home and Folding@home tackle examples of such problems. In public resource computing, particularly with the BOINC system, you divide the problem into work units. People voluntarily install the client on their machines and run it to process the work units sent to them, in return for credits. In the past, it was institutions, such as universities, that ran services, with other people connecting in to use them. Public resource computing turns this use case on its head: the institution (e.g., the university) is the one using the service, while other people contribute to it voluntarily. In the file systems we have covered so far, people want access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power. Since users contribute voluntarily, how do you make them care about the system? Gamification causes many users to become invested: people do work for credits, and those with the most credits are showcased as major contributors. Users can also see, in the GUI of the installed client, the amount of resources (e.g., processor cycles) they have devoted to the cause. When the client produces a result for the work unit it was processing, it sends the result to the server.&lt;br /&gt;
For fault tolerance against, for example, malicious clients or faulty processors, redundant computing is used: work units are processed multiple times.&lt;br /&gt;
Work units are later retired from the clients in one of two cases:&lt;br /&gt;
# The server receives the expected number of results, &#039;&#039;&#039;n&#039;&#039;&#039;, for a work unit; it then takes the answer that the majority gave.&lt;br /&gt;
# The server has transmitted a work unit &#039;&#039;&#039;m&#039;&#039;&#039; times and has not gotten back the &#039;&#039;&#039;n&#039;&#039;&#039; expected results. &lt;br /&gt;
It should be noted that, in doing this, it is possible that some work units are never processed. The probability of this happening can be reduced by increasing the value of &#039;&#039;&#039;m&#039;&#039;&#039;, though.&lt;br /&gt;
&lt;br /&gt;
So, given all this, how would we generally define public resource computing/public interest computing? It is essentially using the public as a resource--you voluntarily give up your extra compute cycles for projects (this is a little like donating blood--public resource computing is a vampire). Looking at public resource computing this way, we can contrast it with a botnet. What is the difference? Both systems utilize client machines to perform or aid in some task. The answer: consent. You are consensually contributing to a project rather than being (unknowingly) forced to. Other differences are the ends/resources wanted, as well as reliability: with a botnet, you can trust that a higher proportion of your users are following your commands exactly (as they have no idea they are performing them), whereas in public resource computing, how can you guarantee that clients are doing what you want?&lt;br /&gt;
  &lt;br /&gt;
A basic comparison with the other file systems we have covered so far:&lt;br /&gt;
&lt;br /&gt;
1) The use case has been turned on its head. In the file systems we have covered so far, people wanted access to the files stored in a networked system; here, a system wants access to people&#039;s machines to utilize their processing power.&lt;br /&gt;
2) Other file systems were about many clients sharing data; this is more about sharing processing power. In Folding@home the system can store some of its data on clients&#039; storage, but that is not public resource computing&#039;s main focus.&lt;br /&gt;
3) It is nothing like OceanStore, where there is no centralized authority. In BOINC there is still a master/slave relation between the centralized server and the clients installed across users&#039; machines; in that sense it is more like GFS, which also had a centralized metadata server.&lt;br /&gt;
4) Public resource systems are like botnets, but people install these clients with consent, and there is no need for communication between the clients (it is not a peer-to-peer network). The clients could be made to communicate peer to peer, but that would risk security, as clients are not trusted in the network.&lt;br /&gt;
&lt;br /&gt;
Reliability - how does SETI address questions of fault tolerance? It uses replication: work units are assigned to multiple clients, and the results returned to the server can be analyzed for outliers in order to detect malicious users. That, however, only addresses fault tolerance from the client perspective. SETI also has a centralized server, which can go down. When it does, an exponential backoff mechanism pushes the clients back, asking them to wait before sending their results again; otherwise, whenever the server came back up, many clients might try to access it at once and crash it again--a DDoS manufactured by the server&#039;s own inadequacies. The exponential backoff approach is similar to the one adopted in resolving TCP congestion.&lt;br /&gt;
Public resource computing does not need to be highly reliable, because it is used by scientists/researchers who can bring the system back up if it goes down and start again. There are, however, a few measures discussed within SETI, like read-only data backup. Compare this to highly reliable systems like Ceph or OceanStore, which could recover the data in case of node crashes.&lt;br /&gt;
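The backoff behaviour described above can be sketched as follows (a rough illustration with invented parameters, not SETI&#039;s actual schedule). The jitter is what keeps recovering clients from all reconnecting at the same instant.&lt;br /&gt;

```python
# Sketch of exponential backoff with jitter: the wait doubles after
# each failed attempt, up to a cap, and is randomized so clients do
# not stampede the server the moment it comes back up.
import random

def backoff_delays(attempts, base=1.0, cap=3600.0):
    delays = []
    for i in range(attempts):
        d = min(cap, base * (2 ** i))                # 1, 2, 4, 8, ... capped
        delays.append(d * random.uniform(0.5, 1.0))  # jitter the wait
    return delays
```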
     &lt;br /&gt;
Skype was modelled much like a public resource computing network (before Microsoft took over), as the network would choose supernodes to act as routers. These supernodes were the machines with higher reliability and better processing power. After Microsoft&#039;s takeover, the supernodes were centralized and the supernode-election functionality was removed from the system.&lt;br /&gt;
&lt;br /&gt;
SETI is an example of an &amp;quot;embarrassingly parallel&amp;quot; workload: the problem naturally lends itself to being divided into work units and computed in parallel, with little need to consolidate the results. It is called &amp;quot;embarrassingly parallel&amp;quot; because there is little to no effort required to distribute the workload, and you don&#039;t get much praise for doing it. Another example of an &amp;quot;embarrassingly parallel&amp;quot; workload from the file systems we have covered so far is web indexing on GFS. Any file system we have discussed so far that doesn&#039;t trust the clients could be modelled to work as a public sharing system.&lt;br /&gt;
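An embarrassingly parallel workload can be sketched in a few lines: the chunks below stand in for work units, and no chunk ever needs to talk to another. The analysis function is a made-up stand-in, not any project&#039;s real computation.&lt;br /&gt;

```python
# Sketch of an embarrassingly parallel computation: each chunk is
# processed independently, so distributing the work is trivial.
from multiprocessing import Pool

def analyze(chunk):
    # Stand-in for per-work-unit analysis (e.g., scanning a slice of
    # telescope data); depends only on its own chunk, nothing shared.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [list(range(i * 100, (i + 1) * 100)) for i in range(8)]
    with Pool() as pool:
        totals = pool.map(analyze, chunks)  # chunks run in parallel
    print(sum(totals))
```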
&lt;br /&gt;
Note: Public resource computing is also very similar to MapReduce, which we will be discussing later in the course. Make sure to keep public resource computing in mind when we reach it.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18766</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18766"/>
		<updated>2014-03-09T15:39:27Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* How to read a research paper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous, persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control/monopoly over the system. To support the goal of high availability, there was a high amount of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;. This is possibly due to the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that none of the actors could be trusted--everything else they did was right. This assumption caused the system to become needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic, as this is not an assumption that is generally made (i.e., it is normally assumed that at least some of the actors can be trusted), and other successful distributed systems are built on a more trusting model. In short, the solution that accommodates the untrusted-actors assumption is just too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) is OceanStore&#039;s most attractive feature, yet it is also what ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system. It should be noted that every system is compromisable, despite this mistrust.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if a user loses their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that new key to all the remaining users (everyone except the revoked user).&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side; the server cannot know who is accessing the data. On the economic side, the model as defined is unconvincing. The authors suggest that a collection of companies will host OceanStore servers and consumers will buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
A subset of the features outlined for OceanStore already exists. For example, Blackberry and Google offer similar services. These current services, however, are each owned by a single company, not many providers. As a user, you also cannot sell your resources back (e.g., you can&#039;t sell your extra storage back to the utility).&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
In short: they actually built it! However, due to the untrusted assumption, they can&#039;t assume the use of any infrastructure, causing them to rebuild &#039;&#039;everything&#039;&#039;! It was built over the internet with Tapestry (routing) and GUID for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
In short: the system had really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
One general question: by how much does their storage model increase the storage needed? The answer: a factor of 4.8x the space is needed (you effectively get about 1/5th of the raw storage). While this is expensive, it does deliver good value, as your data is backed up, replicated, etc. However, it does force one to consider how important each update is, as you burn more storage space with every update made.&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
None of the data is mutated--it is diffed and archived. You are essentially creating a new version of an object and then distributing that object.&lt;br /&gt;
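The no-mutation model can be sketched as append-only versioning. This is a loose illustration of the idea, not OceanStore&#039;s actual data structures; the class and method names are invented.&lt;br /&gt;

```python
# Sketch of update-as-new-version: an "update" archives a new copy
# and nothing is ever overwritten or deleted.
import hashlib

class VersionedObject:
    def __init__(self, data):
        self.versions = [data]           # full history is retained

    def update(self, new_data):
        self.versions.append(new_data)   # old versions stay archived
        return self.guid()

    def guid(self):
        # Content-derived name for the latest version.
        return hashlib.sha256(self.versions[-1]).hexdigest()
```

Every update grows the archive, which is exactly why the storage overhead climbs as more updates are made.&lt;br /&gt;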
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
Absolutely everything is expensive and there is high latency.&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
&#039;&#039;&#039;Byzantine fault tolerance&#039;&#039;&#039;&lt;br /&gt;
* A Byzantine fault tolerant network replicates the data in such a way that even if m out of the n total nodes in the network fail, you can still recover all of the data. However, as you increase m, the number of network messages that must be exchanged also increases, so there is a tradeoff.&lt;br /&gt;
* You are assuming certain actors are malicious.&lt;br /&gt;
&#039;&#039;&#039;Bitcoin&#039;&#039;&#039;&lt;br /&gt;
* Trusted vs Untrusted.&lt;br /&gt;
* It is considered to be untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
One of the good ideas we can salvage is using spare resources in other locations. It can also be noted that similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
# Start with the Introduction to figure out what the problem is.&lt;br /&gt;
# See/read through the related work/background for context of the paper.&lt;br /&gt;
# Go to the conclusion and focus on the results (i.e., figure out what they actually did).&lt;br /&gt;
# Fill in the gaps by reading specific parts of the body.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18765</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18765"/>
		<updated>2014-03-09T15:37:59Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* What&amp;#039;s worth salvaging from the dream? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous, persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control/monopoly over the system. To support the goal of high availability, there was a high amount of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;. This is possibly due to the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that none of the actors could be trusted--everything else they did was right. This assumption caused the system to become needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic, as this is not an assumption that is generally made (i.e., it is normally assumed that at least some of the actors can be trusted), and other successful distributed systems are built on a more trusting model. In short, the solution that accommodates the untrusted-actors assumption is just too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) is OceanStore&#039;s most attractive feature, yet it is also what ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system. It should be noted that every system is compromisable, despite this mistrust.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if a user loses their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that new key to all the remaining users (everyone except the revoked user).&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side; the server cannot know who is accessing the data. On the economic side, the model as defined is unconvincing. The authors suggest that a collection of companies will host OceanStore servers and consumers will buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
A subset of the features outlined for OceanStore already exists. For example, Blackberry and Google offer similar services. These current services, however, are each owned by a single company, not many providers. As a user, you also cannot sell your resources back (e.g., you can&#039;t sell your extra storage back to the utility).&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
In short: they actually built it! However, due to the untrusted assumption, they can&#039;t assume the use of any infrastructure, causing them to rebuild &#039;&#039;everything&#039;&#039;! It was built over the internet with Tapestry (routing) and GUID for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
In short: the system had really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
One general question: by how much does their storage model increase the storage needed? The answer: a factor of 4.8x the space is needed (you effectively get about 1/5th of the raw storage). While this is expensive, it does deliver good value, as your data is backed up, replicated, etc. However, it does force one to consider how important each update is, as you burn more storage space with every update made.&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
None of the data is mutated--it is diffed and archived. You are essentially creating a new version of an object and then distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
Absolutely everything is expensive and there is high latency.&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
&#039;&#039;&#039;Byzantine fault tolerance&#039;&#039;&#039;&lt;br /&gt;
* A Byzantine fault tolerant network replicates the data in such a way that even if m out of the n total nodes in the network fail, you can still recover all of the data. However, as you increase m, the number of network messages that must be exchanged also increases, so there is a tradeoff.&lt;br /&gt;
* You are assuming certain actors are malicious.&lt;br /&gt;
&#039;&#039;&#039;Bitcoin&#039;&#039;&#039;&lt;br /&gt;
* Trusted vs Untrusted.&lt;br /&gt;
* It is considered to be untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
One of the good ideas we can salvage is using spare resources in other locations. It can also be noted that similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with Intro&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* then see the related work for context&lt;br /&gt;
* then go to conclusion. Focus on results.&lt;br /&gt;
* then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18764</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18764"/>
		<updated>2014-03-09T15:37:09Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Other stuff */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous, persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control/monopoly over the system. To support the goal of high availability, there was a high amount of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;. This is possibly due to the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that none of the actors could be trusted--everything else they did was right. This assumption caused the system to become needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic, as this is not an assumption that is generally made (i.e., it is normally assumed that at least some of the actors can be trusted), and other successful distributed systems are built on a more trusting model. In short, the solution that accommodates the untrusted-actors assumption is just too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) is OceanStore&#039;s most attractive feature, yet it is also what ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system. It should be noted that every system is compromisable, despite this mistrust.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if a user loses their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that new key to all the remaining users (everyone except the revoked user).&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side; the server cannot know who is accessing the data. On the economic side, the model as defined is unconvincing. The authors suggest that a collection of companies will host OceanStore servers and consumers will buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
A subset of the features outlined for OceanStore already exists. For example, Blackberry and Google offer similar services. These current services, however, are each owned by a single company, not many providers. As a user, you also cannot sell your resources back (e.g., you can&#039;t sell your extra storage back to the utility).&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
In short: they actually built it! However, due to the untrusted assumption, they can&#039;t assume the use of any infrastructure, causing them to rebuild &#039;&#039;everything&#039;&#039;! It was built over the internet with Tapestry (routing) and GUID for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
In short: the system had really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
One general question: by how much does their storage model increase the storage needed? The answer: a factor of 4.8x the space is needed (you effectively get about 1/5th of the raw storage). While this is expensive, it does deliver good value, as your data is backed up, replicated, etc. However, it does force one to consider how important each update is, as you burn more storage space with every update made.&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
None of the data is mutated--it is diffed and archived. You are essentially creating a new version of an object and then distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
Absolutely everything is expensive and there is high latency.&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
&#039;&#039;&#039;Byzantine fault tolerance&#039;&#039;&#039;&lt;br /&gt;
* A Byzantine fault tolerant network replicates the data in such a way that even if m out of the n total nodes in the network fail, you can still recover all of the data. However, as you increase m, the number of network messages that must be exchanged also increases, so there is a tradeoff.&lt;br /&gt;
* You are assuming certain actors are malicious.&lt;br /&gt;
&#039;&#039;&#039;Bitcoin&#039;&#039;&#039;&lt;br /&gt;
* Trusted vs Untrusted.&lt;br /&gt;
* It is considered to be untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with Intro&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* then see the related work for context&lt;br /&gt;
* then go to conclusion. Focus on results.&lt;br /&gt;
* then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18763</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18763"/>
		<updated>2014-03-09T15:36:00Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Benchmarks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous, persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control/monopoly over the system. To support the goal of high availability, there was a high amount of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;. This is possibly due to the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that none of the actors could be trusted--everything else they did was right. This assumption caused the system to become needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic, as this is not an assumption that is generally made (i.e., it is normally assumed that at least some of the actors can be trusted), and other successful distributed systems are built on a more trusting model. In short, the solution that accommodates the untrusted-actors assumption is just too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) is OceanStore&#039;s most attractive feature, yet it is also what ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system. It should be noted that every system is compromisable, despite this mistrust.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if a user loses their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that new key to all the remaining users (everyone except the revoked user).&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side; the server cannot know who is accessing the data. On the economic side, the model as defined is unconvincing. The authors suggest that a collection of companies will host OceanStore servers and consumers will buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
A subset of the features outlined for OceanStore already exists. For example, Blackberry and Google offer similar services. These current services, however, are each owned by a single company, not many providers. As a user, you also cannot sell your resources back (e.g., you can&#039;t sell your extra storage back to the utility).&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
In short: they actually built it! However, due to the untrusted assumption, they can&#039;t assume the use of any infrastructure, causing them to rebuild &#039;&#039;everything&#039;&#039;! It was built over the internet with Tapestry (routing) and GUID for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
In short: the system had really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
One general question: by how much does their storage model increase the storage needed? The answer: a factor of 4.8x the space is needed (you effectively get about 1/5th of the raw storage). While this is expensive, it does deliver good value, as your data is backed up, replicated, etc. However, it does force one to consider how important each update is, as you burn more storage space with every update made.&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
None of the data is mutated--it is diffed and archived. You are essentially creating a new version of an object and then distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
Absolutely everything is expensive and there is high latency.&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
* Byzantine fault tolerance&lt;br /&gt;
** A Byzantine fault tolerant network replicates the data in such a way that even if m out of the n total nodes in the network fail, you can still recover all of the data. However, as you increase m, the number of network messages that must be exchanged also increases, so there is a tradeoff.&lt;br /&gt;
** Assuming certain actors are malicious&lt;br /&gt;
* Bitcoin&lt;br /&gt;
** Trusted vs Untrusted.&lt;br /&gt;
** It is considered to be untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with Intro&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* then see the related work for context&lt;br /&gt;
* then go to conclusion. Focus on results.&lt;br /&gt;
* then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18762</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18762"/>
		<updated>2014-03-09T15:33:18Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Pond: What insights? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous, persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control/monopoly over the system. To support the goal of high availability, there was a high amount of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;. This is possibly due to the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that all actors must be mistrusted--nearly everything else the designers did was right. This assumption made the system needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic: it is normally assumed that at least some of the actors can be trusted, and other successful distributed systems are built on a more trusting model. In short, a solution that accommodates the untrusted-actors assumption is simply too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) was both the system&#039;s most attractive feature and the one that ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system--even though, despite this mistrust, every system is compromisable.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if users lose their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that key to the users who should still have access.&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side: the server cannot know who is accessing the data. On the economic side, the model is unconvincing as defined. The authors suggest that a collection of companies would host OceanStore servers and consumers would buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
A subset of the features outlined for OceanStore already exists. For example, Blackberry and Google offer similar services. These current services, however, are each owned by a single company rather than by many providers. Nor can you, as a user, sell resources back (e.g., you can&#039;t sell your extra storage back to the utility).&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
In short: they actually built it! However, due to the untrusted assumption, they couldn&#039;t rely on any existing infrastructure, forcing them to rebuild &#039;&#039;everything&#039;&#039;! It was built over the internet with Tapestry for routing and GUIDs for object identification (the object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
* Really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
* How much do they increase the storage needed to implement their storage model?&lt;br /&gt;
* Factor of 4.8x the space needed (you&#039;ll have roughly 1/5th the usable storage)&lt;br /&gt;
* Expensive, but good value (data is backed up, replicated, etc.)&lt;br /&gt;
* Consider the importance of an update before making it&lt;br /&gt;
** More storage is consumed as more updates are made&lt;br /&gt;
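As a quick arithmetic check on the 4.8x figure above (a back-of-the-envelope sketch, not a number from the paper itself):&lt;br /&gt;

```python
overhead = 4.8            # bytes stored per byte of user data (factor above)
usable = 1 / overhead     # fraction of raw capacity left for user data
print(f"usable fraction: {usable:.3f}")  # 0.208, i.e. roughly 1/5
```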
&lt;br /&gt;
===Update performance===&lt;br /&gt;
* No data is mutated. It is diffed and archived.&lt;br /&gt;
* Creating a new version of an object and distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
* Everything is expensive!&lt;br /&gt;
* High latency&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
* Byzantine fault tolerance&lt;br /&gt;
** A Byzantine fault tolerant network replicates data so that even if m of the n nodes in the network fail (or act maliciously), the full data can still be recovered. However, as m increases, so does the number of network messages that must be exchanged, so there is a tradeoff.&lt;br /&gt;
** Assuming certain actors are malicious&lt;br /&gt;
* Bitcoin&lt;br /&gt;
** Trusted vs Untrusted.&lt;br /&gt;
** It is considered untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
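The replication/message tradeoff above can be sketched numerically (a hypothetical back-of-the-envelope, assuming the classic Byzantine agreement bound of 3f + 1 or more replicas to tolerate f faulty nodes, with PBFT-style agreement costing on the order of n-squared messages per request):&lt;br /&gt;

```python
def min_replicas(f):
    # Classic Byzantine agreement bound (e.g., PBFT): tolerating f
    # faulty or malicious nodes needs at least 3*f + 1 replicas.
    return 3 * f + 1

def agreement_messages(n):
    # PBFT-style agreement exchanges on the order of n*n messages
    # per request, so raising the fault threshold gets expensive fast.
    return n * n

for f in (1, 2, 3):
    n = min_replicas(f)
    print(f"tolerate {f} faults: {n} replicas, ~{agreement_messages(n)} messages")
```

The quadratic message growth is one concrete reason the fully untrusted model discussed above is so expensive.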
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with the introduction&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* Then read the related work for context&lt;br /&gt;
* Then go to the conclusion; focus on the results&lt;br /&gt;
* Then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18761</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18761"/>
		<updated>2014-03-09T15:32:03Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control of, or a monopoly over, the system. To support the goal of high availability, there was a high degree of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;, and likely stems from the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that all actors must be mistrusted--nearly everything else the designers did was right. This assumption made the system needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic: it is normally assumed that at least some of the actors can be trusted, and other successful distributed systems are built on a more trusting model. In short, a solution that accommodates the untrusted-actors assumption is simply too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) was both the system&#039;s most attractive feature and the one that ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system--even though, despite this mistrust, every system is compromisable.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if users lose their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that key to the users who should still have access.&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side: the server cannot know who is accessing the data. On the economic side, the model is unconvincing as defined. The authors suggest that a collection of companies would host OceanStore servers and consumers would buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
A subset of the features outlined for OceanStore already exists. For example, Blackberry and Google offer similar services. These current services, however, are each owned by a single company rather than by many providers. Nor can you, as a user, sell resources back (e.g., you can&#039;t sell your extra storage back to the utility).&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
&lt;br /&gt;
* They actually built it.&lt;br /&gt;
* They couldn&#039;t assume the use of any existing infrastructure, so they rebuilt everything!&lt;br /&gt;
** Built over the internet.&lt;br /&gt;
** Tapestry (routing).&lt;br /&gt;
** GUIDs for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
* Really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
* How much do they increase the storage needed to implement their storage model?&lt;br /&gt;
* Factor of 4.8x the space needed (you&#039;ll have roughly 1/5th the usable storage)&lt;br /&gt;
* Expensive, but good value (data is backed up, replicated, etc.)&lt;br /&gt;
* Consider the importance of an update before making it&lt;br /&gt;
** More storage is consumed as more updates are made&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
* No data is mutated. It is diffed and archived.&lt;br /&gt;
* Creating a new version of an object and distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
* Everything is expensive!&lt;br /&gt;
* High latency&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
* Byzantine fault tolerance&lt;br /&gt;
** A Byzantine fault tolerant network replicates data so that even if m of the n nodes in the network fail (or act maliciously), the full data can still be recovered. However, as m increases, so does the number of network messages that must be exchanged, so there is a tradeoff.&lt;br /&gt;
** Assuming certain actors are malicious&lt;br /&gt;
* Bitcoin&lt;br /&gt;
** Trusted vs Untrusted.&lt;br /&gt;
** It is considered untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with the introduction&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* Then read the related work for context&lt;br /&gt;
* Then go to the conclusion; focus on the results&lt;br /&gt;
* Then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18760</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18760"/>
		<updated>2014-03-09T15:30:36Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Technology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control of, or a monopoly over, the system. To support the goal of high availability, there was a high degree of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;, and likely stems from the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that all actors must be mistrusted--nearly everything else the designers did was right. This assumption made the system needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic: it is normally assumed that at least some of the actors can be trusted, and other successful distributed systems are built on a more trusting model. In short, a solution that accommodates the untrusted-actors assumption is simply too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
As outlined above, the trust model (read: fundamentally untrusted model) was both the system&#039;s most attractive feature and the one that ultimately killed it. The untrusted assumption placed a huge burden on the system, forcing technical limitations that made OceanStore uncompetitive with other solutions. It is simply much easier and more convenient to trust a given system--even though, despite this mistrust, every system is compromisable.&lt;br /&gt;
&lt;br /&gt;
The public key system also reduces usability--if users lose their key, they are completely out of luck and would need to acquire a new one. It also means that, if you wanted to revoke a user&#039;s access to an object, you would have to re-encrypt the object with a new key and distribute that key to the users who should still have access.&lt;br /&gt;
&lt;br /&gt;
With regard to security, there is no security mechanism on the server side: the server cannot know who is accessing the data. On the economic side, the model is unconvincing as defined. The authors suggest that a collection of companies would host OceanStore servers and consumers would buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
* A subset of the features already exists&lt;br /&gt;
** Blackberry and Google offer similar services.&lt;br /&gt;
** These current services are owned by one company, not many providers.&lt;br /&gt;
** You cannot sell back your resources as a user.&lt;br /&gt;
*** e.g., you cannot sell your extra storage back to the utility.&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
&lt;br /&gt;
* They actually built it.&lt;br /&gt;
* They couldn&#039;t assume the use of any existing infrastructure, so they rebuilt everything!&lt;br /&gt;
** Built over the internet.&lt;br /&gt;
** Tapestry (routing).&lt;br /&gt;
** GUIDs for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
* Really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
* How much do they increase the storage needed to implement their storage model?&lt;br /&gt;
* Factor of 4.8x the space needed (you&#039;ll have roughly 1/5th the usable storage)&lt;br /&gt;
* Expensive, but good value (data is backed up, replicated, etc.)&lt;br /&gt;
* Consider the importance of an update before making it&lt;br /&gt;
** More storage is consumed as more updates are made&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
* No data is mutated. It is diffed and archived.&lt;br /&gt;
* Creating a new version of an object and distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
* Everything is expensive!&lt;br /&gt;
* High latency&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
* Byzantine fault tolerance&lt;br /&gt;
** A Byzantine fault tolerant network replicates data so that even if m of the n nodes in the network fail (or act maliciously), the full data can still be recovered. However, as m increases, so does the number of network messages that must be exchanged, so there is a tradeoff.&lt;br /&gt;
** Assuming certain actors are malicious&lt;br /&gt;
* Bitcoin&lt;br /&gt;
** Trusted vs Untrusted.&lt;br /&gt;
** It is considered untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with the introduction&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* Then read the related work for context&lt;br /&gt;
* Then go to the conclusion; focus on the results&lt;br /&gt;
* Then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18759</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18759"/>
		<updated>2014-03-09T15:25:24Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Why did the dream die? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control of, or a monopoly over, the system. To support the goal of high availability, there was a high degree of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;, and likely stems from the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
The biggest reason the OceanStore dream died was the assumption that all actors must be mistrusted--nearly everything else the designers did was right. This assumption made the system needlessly complicated, as they had to rebuild &#039;&#039;everything&#039;&#039; to accommodate it. It was also unrealistic: it is normally assumed that at least some of the actors can be trusted, and other successful distributed systems are built on a more trusting model. In short, a solution that accommodates the untrusted-actors assumption is simply too expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
* The trust model is the most attractive feature which ultimately killed it.&lt;br /&gt;
** The untrusted assumption was a huge burden on the system. Forced technical limitations made them uncompetitive.&lt;br /&gt;
** It is just easier to trust a given system. More convenient.&lt;br /&gt;
** Every system is compromisable despite this mistrust&lt;br /&gt;
* Public key system reduces usability&lt;br /&gt;
** If you lose your key, you&#039;re out of luck.&lt;br /&gt;
** If you wanted to revoke someone&#039;s access to an object, you would have to re-encrypt the object with a new key and provide that key to the users who still have access to the object.&lt;br /&gt;
* Security&lt;br /&gt;
** There is no security mechanism on the server side.&lt;br /&gt;
** The server cannot know who accesses the data.&lt;br /&gt;
* Economic side&lt;br /&gt;
** The economic model is unconvincing as defined. The authors suggest that a collection of companies will host OceanStore servers, and consumers will buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
* A subset of the features already exists&lt;br /&gt;
** Blackberry and Google offer similar services.&lt;br /&gt;
** These current services are owned by one company, not many providers.&lt;br /&gt;
** You cannot sell back your resources as a user.&lt;br /&gt;
*** e.g., you cannot sell your extra storage back to the utility.&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
&lt;br /&gt;
* They actually built it.&lt;br /&gt;
* They couldn&#039;t assume the use of any existing infrastructure, so they rebuilt everything!&lt;br /&gt;
** Built over the internet.&lt;br /&gt;
** Tapestry (routing).&lt;br /&gt;
** GUIDs for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
* Really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
* How much do they increase the storage needed to implement their storage model?&lt;br /&gt;
* Factor of 4.8x the space needed (you&#039;ll have roughly 1/5th the usable storage)&lt;br /&gt;
* Expensive, but good value (data is backed up, replicated, etc.)&lt;br /&gt;
* Consider the importance of an update before making it&lt;br /&gt;
** More storage is consumed as more updates are made&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
* No data is mutated. It is diffed and archived.&lt;br /&gt;
* Creating a new version of an object and distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
* Everything is expensive!&lt;br /&gt;
* High latency&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
* Byzantine fault tolerance&lt;br /&gt;
** A Byzantine fault tolerant network replicates data so that even if m of the n nodes in the network fail (or act maliciously), the full data can still be recovered. However, as m increases, so does the number of network messages that must be exchanged, so there is a tradeoff.&lt;br /&gt;
** Assuming certain actors are malicious&lt;br /&gt;
* Bitcoin&lt;br /&gt;
** Trusted vs Untrusted.&lt;br /&gt;
** It is considered untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with the introduction&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* Then read the related work for context&lt;br /&gt;
* Then go to the conclusion; focus on the results&lt;br /&gt;
* Then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18758</id>
		<title>DistOS 2014W Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_14&amp;diff=18758"/>
		<updated>2014-03-09T15:22:16Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* What is the dream? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=OceanStore=&lt;br /&gt;
&lt;br /&gt;
==What is the dream?==&lt;br /&gt;
The dream was to create a persistent storage system that had high availability and was universally accessible--a global, ubiquitous persistent data storage solution. OceanStore was meant to be a utility managed by multiple parties, with no one party having total control of, or a monopoly over, the system. To support the goal of high availability, there was a high degree of redundancy and fault tolerance. For high persistence, everything was archived--nothing was ever truly deleted. This can be likened to working in version control with &amp;quot;commits&amp;quot;, and likely stems from the realization that the easier it is to delete things, the easier it is to lose things.&lt;br /&gt;
&lt;br /&gt;
The basic assumption made by the designers of OceanStore, however, was that none of the servers could be trusted. To support this, the system held only opaque/encrypted data. As such, the system could be used for more than files (e.g., for whole databases). &lt;br /&gt;
&lt;br /&gt;
The system utilized nomadic data, meaning that data could be cached anywhere, unlike with NFS and AFS where only specific servers can cache the data.&lt;br /&gt;
&lt;br /&gt;
==Why did the dream die?==&lt;br /&gt;
&lt;br /&gt;
* The biggest reason it died was its assumption of mistrusting the actors.&lt;br /&gt;
** Everything else they did was right.&lt;br /&gt;
* Other successful distributed systems are built on a more trusted model.&lt;br /&gt;
* The complexity of OceanStore is too great.&lt;br /&gt;
** The solution is expensive.&lt;br /&gt;
&lt;br /&gt;
=== Technology ===&lt;br /&gt;
* The trust model is the most attractive feature which ultimately killed it.&lt;br /&gt;
** The untrusted assumption was a huge burden on the system. Forced technical limitations made them uncompetitive.&lt;br /&gt;
** It is just easier to trust a given system. More convenient.&lt;br /&gt;
** Every system is compromisable despite this mistrust&lt;br /&gt;
* Public key system reduces usability&lt;br /&gt;
** If you lose your key, you&#039;re out of luck.&lt;br /&gt;
** If you wanted to revoke someone&#039;s access to an object, you would have to re-encrypt the object with a new key and provide that key to the users who still have access to the object.&lt;br /&gt;
* Security&lt;br /&gt;
** There is no security mechanism on the server side.&lt;br /&gt;
** The server cannot know who accesses the data.&lt;br /&gt;
* Economic side&lt;br /&gt;
** The economic model is unconvincing as defined. The authors suggest that a collection of companies will host OceanStore servers, and consumers will buy capacity (not unlike web hosting today).&lt;br /&gt;
&lt;br /&gt;
===Use Cases===&lt;br /&gt;
* A subset of the features already exists&lt;br /&gt;
** Blackberry and Google offer similar services.&lt;br /&gt;
** These current services are owned by one company, not many providers.&lt;br /&gt;
** You cannot sell back your resources as a user.&lt;br /&gt;
*** e.g., you cannot sell your extra storage back to the utility.&lt;br /&gt;
&lt;br /&gt;
==Pond: What insights?==&lt;br /&gt;
&lt;br /&gt;
* They actually built it.&lt;br /&gt;
* They couldn&#039;t assume the use of any existing infrastructure, so they rebuilt everything!&lt;br /&gt;
** Built over the internet.&lt;br /&gt;
** Tapestry (routing).&lt;br /&gt;
** GUIDs for object identification (object naming scheme).&lt;br /&gt;
&lt;br /&gt;
==Benchmarks==&lt;br /&gt;
* Really good read speed, really bad write speed.&lt;br /&gt;
&lt;br /&gt;
===Storage overhead===&lt;br /&gt;
* How much do they increase the storage needed to implement their storage model?&lt;br /&gt;
* Factor of 4.8x the space needed (you&#039;ll have roughly 1/5th the usable storage)&lt;br /&gt;
* Expensive, but good value (data is backed up, replicated, etc.)&lt;br /&gt;
* Consider the importance of an update before making it&lt;br /&gt;
** More storage is consumed as more updates are made&lt;br /&gt;
&lt;br /&gt;
===Update performance===&lt;br /&gt;
* No data is mutated. It is diffed and archived.&lt;br /&gt;
* Creating a new version of an object and distributing that object.&lt;br /&gt;
&lt;br /&gt;
===Benchmarks in a nutshell===&lt;br /&gt;
* Everything is expensive!&lt;br /&gt;
* High latency&lt;br /&gt;
&lt;br /&gt;
==Other stuff==&lt;br /&gt;
* Byzantine fault tolerance&lt;br /&gt;
** A Byzantine fault tolerant network replicates data so that even if m of the n nodes in the network fail (or act maliciously), the full data can still be recovered. However, as m increases, so does the number of network messages that must be exchanged, so there is a tradeoff.&lt;br /&gt;
** Assuming certain actors are malicious&lt;br /&gt;
* Bitcoin&lt;br /&gt;
** Trusted vs Untrusted.&lt;br /&gt;
** It is considered untrusted, but a huge amount of trust is required when exchanges are made.&lt;br /&gt;
&lt;br /&gt;
==What&#039;s worth salvaging from the dream?==&lt;br /&gt;
* Using spare resources in other locations.&lt;br /&gt;
* Similar routing systems are used in large peer-to-peer systems.&lt;br /&gt;
&lt;br /&gt;
==How to read a research paper==&lt;br /&gt;
* Start with the introduction&lt;br /&gt;
** Figure out what the problem is&lt;br /&gt;
* Then read the related work for context&lt;br /&gt;
* Then go to the conclusion; focus on the results&lt;br /&gt;
* Then fill in the gaps by reading specific parts of the body&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_12&amp;diff=18625</id>
		<title>DistOS 2014W Lecture 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_12&amp;diff=18625"/>
		<updated>2014-02-13T23:13:45Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
Chubby, developed at Google, was designed as a coarse-grained locking service for use within loosely coupled distributed systems (i.e., networks consisting of a large number of small machines). The key contribution was the implementation of Chubby itself (i.e., no new algorithms were designed/introduced). &lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
The funny thing is that Chubby is essentially a filesystem (with files, file permissions, reading/writing, a hierarchical structure, etc.) with a few caveats: mainly that any file can act as a reader/writer lock, and that only whole-file operations are performed (i.e., the whole file is written or read), as the files are quite small (256K max).&lt;br /&gt;
&lt;br /&gt;
All the locks are fully advisory, meaning others can &amp;quot;go around&amp;quot; whoever holds the lock to access the resource (for reading and, sometimes, writing), as opposed to mandatory locks, which give completely exclusive access to a resource. Note that Linux also utilizes advisory locks, as opposed to Windows, which only utilizes mandatory locks. This can be seen as a shortcoming of Windows: when anything about the system changes, it must be completely rebooted because the mandatory locks on files end up being held too long. With advisory locks, as in Linux, the system need only be rebooted when the kernel is modified/updated.&lt;br /&gt;
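The &amp;quot;go around&amp;quot; behaviour of advisory locks can be demonstrated with POSIX locks on Linux (a minimal sketch using flock; Chubby&#039;s own lock API is different, this only shows that an advisory lock does not stop a non-cooperating writer):&lt;br /&gt;

```python
import fcntl
import tempfile

# The lock holder takes an exclusive advisory lock on a temp file.
holder = tempfile.NamedTemporaryFile()
fcntl.flock(holder, fcntl.LOCK_EX)

# A non-cooperating process (simulated here by a second handle) never
# checks the lock, so its write succeeds despite the held lock.
with open(holder.name, "r+b") as rogue:
    rogue.write(b"bypassed")

with open(holder.name, "rb") as check:
    data = check.read()

fcntl.flock(holder, fcntl.LOCK_UN)
holder.close()
print(data)  # b'bypassed'
```

A mandatory lock would instead block or fail the rogue write until the holder released the lock.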
&lt;br /&gt;
Chubby also functions as a name server, but only really for functional names/roles, such as for the mail server or a GFS server (i.e., Chubby is mainly used as a name server for logical/symbolic names for roles). As a name server, Chubby provides guarantees not given by DNS (e.g., DNS is subject to stale caches), because Chubby provides a unified view of the way things are in the system. The name-value mappings in Chubby allow for a consistent, real-time, overall view of the entire system.&lt;br /&gt;
&lt;br /&gt;
Chubby was made coarse-grained for scalability: coarse-grained locks, held for long periods, make it possible to build a distributed system without overloading the lock service, while fine-grained locks would not scale well. It can also be noted that fine-grained locks could be implemented on top of the coarse-grained ones. The entire point of Chubby was to give ultra-high availability and integrity. &lt;br /&gt;
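&lt;br /&gt;
As a hypothetical sketch of layering fine-grained locks on a single coarse lock (the class and names here are illustrative, not from the paper), using in-process Python locks where the coarse lock stands in for a Chubby lock:&lt;br /&gt;

```python
import threading

# Hypothetical sketch: many fine-grained locks layered on one coarse
# lock. The coarse lock is held only briefly, to look up or create the
# per-name fine-grained lock, so its cost is amortized over many uses.
class FineGrainedLocks:
    def __init__(self):
        self.coarse = threading.Lock()   # stands in for the coarse lock
        self.fine = {}

    def acquire(self, name):
        with self.coarse:                # brief critical section
            lock = self.fine.setdefault(name, threading.Lock())
        lock.acquire()

    def release(self, name):
        self.fine[name].release()

locks = FineGrainedLocks()
locks.acquire("row-42")
locks.acquire("row-43")      # independent names do not block each other
locks.release("row-42")
locks.release("row-43")
print(len(locks.fine), "fine-grained locks created")
```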
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
* Chubby&#039;s replicas use the Paxos consensus algorithm to keep their replicated state consistent and to elect a master&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussion==&lt;br /&gt;
&lt;br /&gt;
Where else do we see things such as Chubby? Where would you want this consistent, overall view?&lt;br /&gt;
&lt;br /&gt;
You would want this consistent view in any synchronized set of files across a set of systems, such as Dropbox. The main tenets of Chubby&#039;s design would hold anywhere you want to make sure there is online consensus. It should be noted that this is not like version control: with version control, everyone has their own copy, and the copies are all merged later, whereas in this type of system there is only one version available throughout the distributed system. Chubby&#039;s design differs from Dropbox in that Dropbox is designed so that you can work offline and then synchronize your changes once you are online again (i.e., there can sometimes be more than one version of a file, meaning you lack the consistent, overall view given by Chubby).&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18405</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18405"/>
		<updated>2014-01-16T16:10:51Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Graphics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that more focus was put on I/O capabilities and less on computational power (arithmetic, etc.). There were two levels of task-switching; the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized BCPL machine (BCPL being a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
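&lt;br /&gt;
Cooperative switching of this sort can be sketched in a few lines of Python (an analogy using generators, not Alto microcode): each task keeps the processor until it voluntarily yields:&lt;br /&gt;

```python
# Sketch (an analogy, not Alto code): cooperative multitasking with
# generators. Each task runs until it yields, just as an Alto task ran
# until it executed the instruction that gave up control.
log = []

def task(name, steps):
    for i in range(steps):
        log.append((name, i))     # record who held the processor
        yield                     # voluntary switch point

def run(tasks):
    tasks = list(tasks)
    while tasks:
        for t in list(tasks):
            try:
                next(t)
            except StopIteration:
                tasks.remove(t)   # task finished, drop it

run([task("disk", 2), task("user", 2)])
print(log)
```

A task that never yields would starve every other task, which is the security concern noted above.&lt;br /&gt;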
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
To use the disk, commands such as read, write, truncate, and delete were made available through the disk controller. To reduce the risk of global damage, structural information was saved in a label on each page, so the filesystem could be rebuilt from the pages themselves. A hints mechanism was also available: the directory stored a hint of where a file resides on disk, and file integrity was checked using a seal bit together with the label.&lt;br /&gt;
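&lt;br /&gt;
A hypothetical sketch of the label idea (the field names here are invented, not the Alto&#039;s actual on-disk format): each page carries enough structural information to identify which file and page it belongs to, plus a seal checked for integrity:&lt;br /&gt;

```python
# Hypothetical page layout: a label with structural information lets a
# scavenger tell which file and page each disk page belongs to, and a
# seal value is checked for integrity.
SEAL = 0xA5

def make_page(file_id, page_no, data):
    return {"seal": SEAL, "file_id": file_id, "page_no": page_no, "data": data}

def check_page(page, file_id, page_no):
    # A page is trusted only if the seal is intact and the label matches.
    return (page["seal"] == SEAL and
            page["file_id"] == file_id and
            page["page_no"] == page_no)

page = make_page(7, 0, "first page of file 7")
print(check_page(page, 7, 0), check_page(page, 7, 1))
```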
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5 by 11 inch piece of paper as the type of display they were striving for, which showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black-or-white bits per inch of display (i.e., 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, which still allowed them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable; interestingly, we would find it objectionable today, most likely because we are spoiled by the sheer speed of computers, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting.&lt;br /&gt;
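&lt;br /&gt;
Rough arithmetic on the numbers above suggests why the display consumed so much memory (a sketch using only figures from the text, not exact Alto specifications):&lt;br /&gt;

```python
# An 8.5 x 11 inch page imaged at the 70 dpi they settled on, one bit
# per dot, against the initial 64K of 16-bit words. Rough arithmetic
# from the figures in the text, not exact Alto specs.
dots = int(8.5 * 70) * int(11 * 70)      # 595 x 770 = 458,150 dots
words = dots / 16                        # about 28,634 words
fraction = words / 65536                 # about 0.44 of 64K words
print(dots, round(words), round(fraction, 2))
```

Roughly 0.44 of the initial 64K words, on the order of the half noted above.&lt;br /&gt;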
&lt;br /&gt;
Another interesting point: the authors state that they thought it beneficial to access display memory directly rather than through conventional frame buffer organizations. While we are unsure what they meant by conventional frame buffer organizations, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard Apple mouse today at 1300 dpi) and had three buttons (one of the standard configurations of mice produced today). They were already using different mouse cursors (i.e., changing the pointer image of the cursor on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence is the use of optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5 by 11 inch page defined at 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically enough, even this falls short of what they had originally wanted for the Alto display itself. However, they did not have enough memory for this and had to work around it with techniques such as an incremental algorithm and a reduced number of scan lines. We were disappointed that they did not actually discuss the hardware implementation of the printer, only the software controller. Still, it is interesting that dividing the memory requirements of printing between the printer hardware and the computer was quite a modern idea at the time, and still is.&lt;br /&gt;
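&lt;br /&gt;
The quoted figures can be checked with a little arithmetic:&lt;br /&gt;

```python
# Verifying the printer figures quoted above: 8.5 x 11 inches at 350 dpi
# gives roughly 3000 dots per scan line and roughly 4000 scan lines.
dpi = 350
dots_per_line = int(8.5 * dpi)       # 2975, "roughly 3000 dots"
scan_lines = int(11 * dpi)           # 3850, "roughly 4000 scan lines"
total_bits = dots_per_line * scan_lines
print(dots_per_line, scan_lines, total_bits // 8, "bytes per page")
```

About 1.4 MB of raster per page, which makes the memory shortage above easy to believe.&lt;br /&gt;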
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes passing mention of having a tablet to draw on; however, he states that no one really liked the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
A recurring theme was the lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18403</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18403"/>
		<updated>2014-01-16T16:00:01Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Printer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic etc). There were two levels of task-switching; the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized BCPL machine (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provides us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found to not be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by traditional frame buffer organizations, it is interesting to note that frame buffer organizations is what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple which is 1300 dpi) and had three buttons (one of the standard configurations of mice that are produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The real interesting point here is that the design outlined in the paper was so similar to designs we still use today. The only real divergence was the use of optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility with regards to how we design mice (e.g., having a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5 by 11 inch page defined with 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically enough, this is not even what they had wanted for the actual Alto display. However, they did not have enough memory to do this and had to work around this by using things such as an incremental algorithm and reducing the number of scan lines. We were disappointed that they did not actually discuss the hardware implementation of the printer, only the software controller. However, it is interesting that the fact they are dividing the memory requirements of the printer between the hardware itself and the computer was quite a modern idea at the time, and still is.&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes a passing mention to having a tablet to draw on. However, he stated that no one really liked having the tablet as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
The recurring theme of lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18402</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18402"/>
		<updated>2014-01-16T15:58:37Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Other Interesting Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic etc). There were two levels of task-switching; the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized BCPL machine (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provides us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found to not be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by traditional frame buffer organizations, it is interesting to note that frame buffer organizations is what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple which is 1300 dpi) and had three buttons (one of the standard configurations of mice that are produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The real interesting point here is that the design outlined in the paper was so similar to designs we still use today. The only real divergence was the use of optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility with regards to how we design mice (e.g., having a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5 by 11 inch page defined with 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically enough, this is not even what they had wanted for the actual Alto display. However, they did not have enough memory to do this and had to work around this by using things such as an incremental algorithm and reducing the number of scan lines.&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes a passing mention to having a tablet to draw on. However, he stated that no one really liked having the tablet as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
The recurring theme of lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18401</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18401"/>
		<updated>2014-01-16T15:58:16Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Printer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic etc). There were two levels of task-switching; the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized BCPL machine (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provides us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found to not be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by traditional frame buffer organizations, it is interesting to note that frame buffer organizations is what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple which is 1300 dpi) and had three buttons (one of the standard configurations of mice that are produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The real interesting point here is that the design outlined in the paper was so similar to designs we still use today. The only real divergence was the use of optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility with regards to how we design mice (e.g., having a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5 by 11 inch page defined with 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically enough, this is not even what they had wanted for the actual Alto display. However, they did not have enough memory to do this and had to work around this by using things such as an incremental algorithm and reducing the number of scan lines.&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes a passing mention to having a tablet to draw on. However, he stated that no one really liked having the tablet as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18398</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18398"/>
		<updated>2014-01-16T15:54:39Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Graphics, Mouse, Printing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic etc). There were two levels of task-switching; the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized BCPL machine (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provides us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found to not be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by traditional frame buffer organizations, it is interesting to note that frame buffer organizations is what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple which is 1300 dpi) and had three buttons (one of the standard configurations of mice that are produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The real interesting point here is that the design outlined in the paper was so similar to designs we still use today. The only real divergence was the use of optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we just have more flexibility with regards to how we design mice (e.g., having a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes a passing mention to having a tablet to draw on. However, he stated that no one really liked having the tablet as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18397</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18397"/>
		<updated>2014-01-16T15:53:49Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Graphics, Mouse, Printing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic etc). There were two levels of task-switching; the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized BCPL machine (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provides us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found to not be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by traditional frame buffer organizations, it is interesting to note that frame buffer organizations is what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard Apple mouse today at 1300 dpi) and had three buttons (still one of the standard configurations of mice produced today). They were already using different mouse cursors (i.e., different pointer images on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence has been the move to optical mice, although the introduction of optical mice did not altogether halt the use of non-optical mice. Today, we simply have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18396</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18396"/>
		<updated>2014-01-16T15:46:15Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Graphics, Mouse, Printing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on I/O capabilities than on computational power (arithmetic, etc.). There were two levels of task-switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtualized machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K 16-bit words of memory and eventually grew to 256K words. However, the higher memory was accessible only through special tricks, much as memory above 4 GB is not accessible on today&#039;s 32-bit systems without similar tricks.&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide in a display sense, constantly referencing an 8.5 by 11 inch piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi, which still allowed them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, we would find this objectionable today--most likely because we are spoiled by the sheer speed of computers now, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they found it beneficial to be able to access display memory directly rather than using conventional frame buffer organizations. While we are unsure what they meant by &amp;quot;conventional frame buffer organizations&amp;quot;, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=18393</id>
		<title>DistOS 2014W Lecture 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=18393"/>
		<updated>2014-01-14T19:49:16Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;What is an OS?&#039;&#039;&#039; An OS allows you to run the same software on (slightly) different hardware. Here are some ideas of what an OS could mean and some functionalities and responsibilities that OSes include:&lt;br /&gt;
* A hardware abstraction such that hardware resources can be accessed by software&lt;br /&gt;
* Provides a consistent execution environment, which hardware doesn&#039;t provide (i.e. code written to an interface -- think portable code)&lt;br /&gt;
* Manages I/O (such as user I/O and machine I/O, e.g. network I/O, sensors, video, etc.)&lt;br /&gt;
* Resource management through multiplexing and policy use&lt;br /&gt;
** Multiplexing (sharing): one resource wanted by multiple users&lt;br /&gt;
* Communication infrastructure (for example, Inter-Process Communication mechanisms) between the users (processes, applications) of the operating system.&lt;br /&gt;
* An OS turns the computer you have into a computer you want to program&lt;br /&gt;
* Manages synchronization and concurrency issues&lt;br /&gt;
&lt;br /&gt;
An OS can be defined by the role it plays in the programming of systems. It takes care of resource management and creates abstractions. An OS turns hardware into the computer/API/interface you WANT to program.&lt;br /&gt;
&lt;br /&gt;
This is similar to how the browser is becoming the OS of the web. The browser is&lt;br /&gt;
the key abstraction needed to run web apps. It is the interface web developers target.&lt;br /&gt;
It doesn&#039;t matter what you consume a given website on (e.g. a phone, tablet,&lt;br /&gt;
etc.), the browser abstracts the device&#039;s hardware and OS away.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;So, what&#039;s a distributed OS?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anil prefers to think of this &#039;logically&#039; rather than functionally/physically.  This is&lt;br /&gt;
because the old distributed operating system (DOS) model applies to today&#039;s systems&lt;br /&gt;
(i.e. managing multiple cores, etc.). The traditional definition is systems that&lt;br /&gt;
manage their resources over a network.&lt;br /&gt;
&lt;br /&gt;
A lot of these definitions are hard to pin down because simplicity always gets in&lt;br /&gt;
the way of truth. These concepts do not fit into well-defined classes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Anil&#039;s definition&#039;&#039;&#039; (following up on the similar note on what a traditional OS is, as seen above): &amp;quot;taking the distributed pieces of a system you have and&lt;br /&gt;
turning it into the system you WANT.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It is good to think about DOSs within the context of who/what is in&lt;br /&gt;
control, in terms of who makes and enforces decisions. In essence, who is in charge? The traditional kernel-process model is a dictatorship--an authoritarian&lt;br /&gt;
model of control in which the kernel controls what lives or dies.  The internet, by&lt;br /&gt;
contrast, is decentralised (e.g. DNS). Distributed systems may have distributed&lt;br /&gt;
policies where there is not one source of power. Even in the DOS paradigm we can see instances of authoritarian/centralized approaches, one example being the walled-garden model employed by Apple&#039;s iOS. Anil&#039;s observation is that centralized systems have an inherent fragility built into them, and these kinds of systems come into existence and disappear after a while. Examples include AOL and Myspace, and even Facebook looks to be a possible candidate for a similar fate. Also, concentrations of power will tend to fall apart in the future.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16509</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16509"/>
		<updated>2011-12-19T22:54:17Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The name of our chosen distribution is the Privatix Live-System. The target audience for this system is people who are concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email, etc. The goals of this distribution are therefore mainly security- and privacy-related: providing security-conscious tools and applications integrated into a portable Operating System (OS) for anyone to use at any time. The distribution is meant to be portable, coming in the form of a live Compact Disc (CD) from which the OS can be installed on an external device such as a Universal Serial Bus (USB) flash drive, protected by a password to ensure that all your data remains private even if your external device is lost or compromised. It should be noted that the live CD is mainly meant for installing the OS onto a USB device in order to provide a portable, privacy-conscious OS. The user should not rely solely on the live CD, as the live system does not implement password protection; there are no user accounts on the live CD, and user accounts are only created once the full OS is installed on a USB device. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained from the download page of Markus Mandalka&#039;s website ([http://www.mandalka.name/privatix/download.html.en Mandalka]) by selecting the version you wish to download (we chose the English version). The approximate size of the Privatix Live-System is 838 Megabytes (MB) for the full English version (smaller versions are available which have had features such as GNOME removed).&amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can also be used in a few other ways, such as from a live CD/Digital Video Disc (DVD), or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once it has been installed on an external device, as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix from a CD/DVD, simply burn the operating system to a disc and boot from the CD/DVD when prompted to in the BIOS.  While using the live CD, the user will have access to almost all features of the operating system.  However, because no profiles are set up, if the user locks the computer there will be no way to unlock it, as no password was set up.  Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox requires simply having VirtualBox installed and, when prompted for the installation media, selecting the .iso file downloaded for Privatix.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option.  This brings up the main desktop; while using VirtualBox, the user will have access to all features available when using Privatix with the live CD.  However, there is one small extra layer of security, provided by the profile system of the host operating system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you first must boot into Privatix Live through a CD/DVD.  Then you need to click the install icon on the desktop to begin installing to a device, and select a device for Privatix to install itself on.  The installer will ask if you would like to fill your device with blank data; this makes accessing or recovering what was originally on the device much harder.  The installer will prompt you for a user password as well as an admin password, and will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
&lt;br /&gt;
To boot into Privatix from the device, you can interrupt the computer&#039;s normal boot and boot from the external device.  During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is by installing the system on an external device.  In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use it for its intended purpose--having a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in use between the different machines. We attempted to use the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (an older version of GNU IceCat--a re-branding of Firefox compatible with both Linux and Mac systems) which comes equipped with security features not available by default in Firefox. The main feature we liked is that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open-source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since no two &amp;quot;paths&amp;quot; for a request are ever the same, it is very difficult for your traffic to be monitored. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded Thunderbird mail client (a cross-platform email client with strong security features). The email client was easily set up and used, supporting digital signing and message encryption via certificates by default (as with TOR, this could be disabled if the user wished). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (besides the external device that the system is installed on). This means that we can have an effectively unlimited amount of encrypted data, rather than being limited to the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy, as much of the space on the device that Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was easy to use, well designed, and usable by absolute beginners.&lt;br /&gt;
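The report does not identify the exact encryption tool Privatix ships, so as a neutral illustration of the underlying idea (passphrase-protected data that is useless without the passphrase), here is a minimal sketch using the stock openssl command-line tool; the file names and passphrase are hypothetical:&lt;br /&gt;

```shell
# Create a file standing in for sensitive data.
printf 'sensitive notes\n' > secret.txt

# Encrypt it with AES-256, deriving the key from a passphrase (PBKDF2).
# Without the passphrase, secret.txt.enc is unreadable, much like an
# encrypted USB stick without its password.
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:example -in secret.txt -out secret.txt.enc

# Decrypt and confirm the round trip is lossless.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example -in secret.txt.enc -out recovered.txt
cmp secret.txt recovered.txt
```

Whole-device encryption works on the same principle, applied to an entire block device rather than a single file.&lt;br /&gt;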
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, hence meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even with the additional security features not available in other distributions, Privatix would still be a very desirable live system to use. It is portable, especially once installed on an external device, and easily used, with little bloatware. The default applications, such as OpenOffice for data editing, Pidgin for instant messaging, various graphics editors, a video player, and a CD burner/extractor, ensured that the system was still perfectly functional for everyday use, even with security, not extensive functionality, being the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix using the live CD and VirtualBox was equivalent. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, there are certain key features of the Privatix Live-System you are missing (mainly because these features are meant for the portable version installed on an external device). However, just booting from the live CD still gives a lot of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data in the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear there; they are hidden from view, but can be seen by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
&lt;br /&gt;
The main flaw in using the system only in these ways is that the added protection of a user account and password to access the system is not present on the live CD or when using Privatix in a virtual machine. This is because Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, it is highly recommended that the user fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with what it was described as: a secure and portable system. The tools provided to encrypt data and the secure browser with add-ons for anonymity especially supported this belief. However, we also found some parts of the distribution that were a cause for concern.  To begin with, there was a slight language barrier, as the system was originally written in German. This was made apparent by the frequent grammar mistakes in both the existing English documentation and the operating system itself, indicating that English was not the primary language of the writers of this operating system. Most of the documentation for the operating system is also in German. Those who maintain Privatix and its project website are in the process of translating all their documentation so that it is available in both English and German, though currently most of the supporting documentation and FAQ are in German. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers on either portable version of the OS (installed on an external device, or simply using the live CD or booting up in a virtual machine), so wireless networks could not be connected to.  This is a problem because an operating system on a USB stick should be completely portable, yet the missing drivers require you to have a wired connection to use the Internet. We also noticed that, when using Privatix in VirtualBox, wireless capability was provided by the host OS (Windows) even though Privatix itself had no wireless drivers.&lt;br /&gt;
&lt;br /&gt;
Lastly, when we tried to install Privatix onto a USB stick, it took several attempts. We discovered that, to avoid many of the problems we encountered, it is better to use a larger (preferably at least 8 GB) external device for installation and to refrain from filling the external device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
However, once connected to the Internet, all software seems to work as it should. The more basic applications, such as OpenOffice, the instant messaging and email clients, and the multimedia applications, function with no problems, working much as they do in any other Linux distribution. The security tools also seem to work as they should; however, since we do not know how to test the limits of the system&#039;s security measures, we do not know for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, allowing users access to standard applications for tasks such as editing and transporting data, sending/receiving email, instant messaging and multimedia, with the added benefit of strong security and anonymity.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format used by the Privatix Live-System is DEB (the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-18-11)&amp;lt;/ref&amp;gt; The utilities used with this packaging format are dpkg and aptitude. Dpkg is the operating system&#039;s package-management utility, with aptitude acting as a more user-friendly front end. Aptitude made finding a list of installed packages quite easy: it shows a full list of installed packages, segregated into categories such as mail, web, shells and utils. As well as using aptitude, the command line can be used to obtain a list of installed packages. To do this, input the following in a terminal and a list of all installed packages is generated. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this on the command line is useful, we found that using aptitude was generally better, as the packages are segregated into categories, which made viewing the list of installed packages simpler. &lt;br /&gt;
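For scripting, the listing produced by dpkg can also be filtered mechanically: installed packages are the lines whose first column is &amp;quot;ii&amp;quot;, and the second column is the package name. A minimal sketch using a hard-coded sample in the dpkg -l line format (the sample packages are illustrative, not Privatix&#039;s actual list), so it runs even on a machine without dpkg:&lt;br /&gt;

```shell
# Sample lines in the format printed by `dpkg -l`:
#   status  name  version  description
# "ii" means installed; "rc" means removed but config files remain.
sample='ii  icedove  3.0.11-1  security-conscious mail client
ii  tor      0.2.1.26  anonymizing overlay network
rc  oldpkg   1.0       removed, config files remain'

# Keep only installed packages and print their names.
# prints: icedove, then tor
printf '%s\n' "$sample" | awk '$1 == "ii" {print $2}'
```

On a real Privatix system the same filter would simply be fed from the live listing, i.e. dpkg -l piped through the awk command above.&lt;br /&gt;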
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands function in the same way: they will either install a package, or upgrade an already installed version of the package. Note that dpkg itself does not resolve dependencies; a front end such as aptitude will fetch missing dependencies automatically. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; the configuration files is removed. To fully remove a package, we used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot;, which purges the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to speculate as to why a certain &#039;&#039;version&#039;&#039; of a package was included, not just the package itself. We determined, from the release date of Privatix &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix on Distrowatch](Last accessed 12-19-11)&amp;lt;/ref&amp;gt;, that the usual reason a certain version of a package was included was that it was the stable release of the package at that time. We also needed to determine how heavily the packages included within our distribution had been modified by the distribution&#039;s author. However, the author has stated that everything included is based mainly on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable and daily use. As such, many of the packages that come with the standard install of Privatix were included because they were part of the standard install of Debian at the time this distribution was made. Please also note that this reference was taken from the main page of the distribution but that, to view it, you will need to translate it (we used Google Translate), as much of the documentation for this distribution is in German. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Category&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Version&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Upstream Source&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Vintage&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;30%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package Details&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Kernel&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt;linux-base&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31 &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		This version of the kernel was released in December of 2009, making it just under two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel, 3.1.1, was released just yesterday (11/11/2011). &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the version of the Linux kernel on Privatix about two years behind the current stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		We believe that these packages were included within the distribution as they are the standard packages for the Linux kernel included in the standard install of Debian at the time Privatix was released.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6.32-5-686&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6-282&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32+39&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;libc&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc-bin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;2.11.2-10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;http://www.eglibc.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
		This version of libc was released in January 2011, making it approximately 11 months old.&amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21) is currently undergoing testing.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;This package was included because it is part of the standard install of Debian, and because all Linux-based systems come with a version of libc.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc6&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Shell&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;bash&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;4.1-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://tiswww.case.edu/php/chet/bash/bashtop.html&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of bash was released in approximately April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month, version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as bash is the default command-line shell included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Utilities&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;busybox&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1:1.17.1-8&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.busybox.net&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of busybox was released in approximately November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as it is the version of busybox included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Software Packaging&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1.15.8.10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://wiki.debian.org/Teams/Dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11 which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix at 3 months behind the latest stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included since dpkg is the package management system of Debian, the distribution that Privatix is based on.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Web Browser&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;IceWeasel&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.5.16-6&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceWeasel was released in March 2011, making it 9 months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is version 3.5.16-11 which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of IceWeasel included with Privatix at 9 months behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		IceWeasel was included within this distribution as it is a more security-conscious browser than more mainstream browsers such as Mozilla Firefox. IceWeasel, Debian&#039;s re-branding of Firefox (related to GNU IceCat), comes equipped with security features not enabled by default in Firefox.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Tor&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;0.2.1.29-1&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included to help increase security, anonymity and privacy while web browsing, which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;TOR Button (xul-ext-torbutton)&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;1.2.5-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org/torbutton/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR Button was released in October 2010, making it just over a year old.&amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1 which was released last month. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of TORButton included with Privatix at about a year behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included in order to add to the functionality of TOR. This add-on allows the user to enable and disable TOR with the push of a button, located in the corner of their browser. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Email&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;icedove&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.0.11-1+s&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included within Privatix before it was made stable. It was released as an unstable version in December 2010 and later became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This email client was included because it is a more security-conscious email client than others such as the regular version of ThunderBird, providing government-grade security features. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Other&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;pidgin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;2.7.3.1+sq&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.pidgin.im&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of Pidgin was released in October 2010, and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This package was included as Pidgin is the default IM client included with the standard install of Debian, the system on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
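The version strings in the table above follow Debian&#039;s [epoch:]upstream[-revision] convention (e.g. busybox&#039;s 1:1.17.1-8). A minimal POSIX-shell sketch of splitting such a string, assuming a Debian revision is present:&lt;br /&gt;

```shell
# Split a Debian version string of the form [epoch:]upstream[-revision].
v='1:1.17.1-8'                 # busybox's version from the table above

case $v in
  *:*) epoch=${v%%:*} ;;       # text before the ':' is the epoch
  *)   epoch=0 ;;              # the epoch defaults to 0 when omitted
esac
rest=${v#*:}                   # drop the epoch prefix, if any
upstream=${rest%-*}            # everything before the last '-'
revision=${rest##*-}           # the Debian revision after the last '-'

echo "epoch=$epoch upstream=$upstream revision=$revision"
# -> epoch=1 upstream=1.17.1 revision=8
```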
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Privatix generally follows the same initialization process as Debian. Privatix initializes by first executing the BIOS and then the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same init system as Debian, System V init: /etc/inittab is the configuration file, and the /sbin/init program initializes the system following the description in this configuration file. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; inittab sets the default runlevel of Privatix, which is runlevel 2. Following this, all the scripts located in /etc/rc2.d are executed in alphabetical order. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; These scripts are:&lt;br /&gt;
&lt;br /&gt;
[[File:pstree1.png|thumb|right|Process tree after start up]]&lt;br /&gt;
[[File:pstree_.png|thumb|right|Process tree after start up cont.]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: enhanced multi-threaded syslogd, the Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: opens rsync--a program that allows files to be copied to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03networ-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: the log file handling to be done during bootup--mainly things that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this script does nothing and simply exits&lt;br /&gt;
* &#039;&#039;&#039;S07rmnologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd daemon, which records boot console messages, as it is no longer needed once boot completes&lt;br /&gt;
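The order above follows directly from the SNN prefixes of the links in /etc/rc2.d: System V init runs the S-prefixed scripts in lexical order, so the two-digit number after the S determines start order. A small sketch, using a simulated directory listing (the names are taken from the list above) rather than a real /etc/rc2.d:&lt;br /&gt;

```shell
# System V init runs the S-prefixed links in /etc/rc2.d in lexical order.
# Simulated listing of a few runlevel-2 scripts, deliberately shuffled:
scripts='S02tor
S01polipo
S05gdm3
S01rsyslog'

# Sorting lexically reproduces the boot order used by init:
printf '%s\n' "$scripts" | sort
# -> S01polipo, S01rsyslog, S02tor, S05gdm3
```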
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Following this, the system is initialized. The processes running on the newly initialized system and what initializes them are as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Process Name&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;60%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Description&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Initialized By&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; NetworkManager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Daemon that automatically switches network connections to the best available connection  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03networ-manager init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; avahi-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; zeroconf daemon which is used for configuring the network automatically  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03avahi-daemon init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bluetoothd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Enables bluetooth to be used &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03bluetooth init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; cron &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Scheduler of Debian systems  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02cron init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; dbus-launch &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Utility to send messages between processes and applications  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02dbus init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gdm3 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; GNOME display manager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; S05gdm3 init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-screensav &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-settings- &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-terminal &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo web cache--a small and fast caching web proxy &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S01polipo init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; tor &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Because the &amp;quot;path&amp;quot; taken by each request varies, it is very difficult for your traffic to be traced. &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02tor init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We found this information by first confirming that Privatix uses the same style of initialization as Debian. Once we ascertained this, we researched the Debian boot process. Privatix follows the same steps up until the loading of the scripts, some of which differ from Debian&#039;s (e.g. TOR). Following this, we researched each of the scripts run at Privatix&#039;s default runlevel of 2. The scripts are listed above, in the order they execute. To find the purpose of each script and what programs it opened, we manually went through each of the scripts. To find the processes running on the newly initialized system, we used the &amp;quot;pstree&amp;quot; command. Once we had a list of the running processes, we researched the purpose of each process. To find how each process was initialized, we manually searched through the initialization scripts and matched each process with its initialization script.&lt;br /&gt;
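Matching a running process to the script that launched it can be partly automated: searching the runlevel-2 scripts for the daemon&#039;s name narrows down which one to read by hand. A hypothetical sketch, assuming a System V /etc/rc2.d layout (the daemon name is an example):&lt;br /&gt;

```shell
# Find which runlevel-2 init script mentions a given daemon.
# On a machine without a System V /etc/rc2.d this falls through
# to the message branch instead of failing.
daemon=tor    # hypothetical daemon name seen in the pstree output
grep -l "$daemon" /etc/rc2.d/S* 2>/dev/null \
    || echo "no runlevel-2 script mentions $daemon"
```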
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16504</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16504"/>
		<updated>2011-12-19T22:43:42Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Software Packaging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The name of our chosen distribution is the Privatix Live-System. The target audience for this system is people who are concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email, etc. The goals of this distribution are therefore mainly security- and privacy-related: providing security-conscious tools and applications integrated into a portable Operating System (OS) for anyone to use at any time. The distribution is meant to be portable, coming in the form of a live Compact Disc (CD) which can be installed on an external device or a Universal Serial Bus (USB) flash drive with encryption and password protection to ensure that all your data remains private, even if your external device is lost or compromised. It should be noted that the live CD is only meant for installing the OS onto a USB drive in order to provide a portable, privacy-conscious OS. The user should not rely solely on the live CD, as the OS does not implement password protection there: there are no user accounts on the live CD; user accounts are only created once the full OS is installed on a USB drive. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained by going to Markus Mandalka&#039;s website, navigating to the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]), selecting the version you wish to download (we chose the English version) and downloading it. The approximate size of the Privatix Live-System is 838 Megabytes (MB) for the full English version (smaller versions are available which have had features such as GNOME removed). &amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can also be used without installing it on an external device, such as from a live CD/Digital Video Disk (DVD) or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once it has been installed on an external device, as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix with a CD/DVD, simply burn the operating system to a disc and boot from the CD/DVD when prompted in the BIOS.  While using the live CD, the user will have access to almost all features of the operating system.  However, because no profiles were set up, if the user locks the computer there will be no way to unlock it, as no password was set up.  Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox simply requires having VirtualBox installed and selecting the downloaded Privatix .iso file when prompted for the installation media.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option to bring up the main desktop.  While using VirtualBox, the user will have access to all features available when using Privatix with the live CD.  However, there is one small extra layer of security, provided by the host operating system: its profile system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you must first boot into Privatix Live from a CD/DVD.  Then click the install icon on the desktop to begin installing to a device, and select the device for Privatix to install itself on.  The installer will ask if you would like to fill your device with blank data; this makes accessing or recovering what was originally on the device much harder.  The installer will prompt you for a user password as well as an admin password, and will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
&lt;br /&gt;
To boot into Privatix from the device, interrupt the computer&#039;s normal boot and select the external device as the boot device.  During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is by installing the system on an external device.  In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use the system for its intended purpose--having a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in use between the different machines. We attempted to use the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (Debian&#039;s re-branding of Firefox, related to GNU IceCat, and compatible with both Linux and Mac systems), which comes equipped with security features not enabled by default in Firefox. The main add-on we liked was that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Because the &amp;quot;path&amp;quot; taken by each request varies, it is very difficult for your traffic to be traced. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded ThunderBird mail client (a cross-platform email client that provides government-grade security features). The email client was easily set up and used, supporting digital signing and message encryption via certificates by default (as with TOR, this could be disabled if the user wished). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (besides the external device that the system is installed on). This meant that we could have an essentially unlimited amount of encrypted data, not being limited to the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy, as much of the space on the external device that Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was easy to use, well designed, and usable by absolute beginners of the system.&lt;br /&gt;
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, hence meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
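&lt;br /&gt;
As a sketch of what encrypting a secondary device involves, assuming Privatix uses the standard Linux dm-crypt/LUKS stack inherited from its Debian base (an assumption; the device name /dev/sdX1 is a placeholder):&lt;br /&gt;
&lt;br /&gt;
 $ cryptsetup luksFormat /dev/sdX1      # WARNING: destroys existing data on the device&lt;br /&gt;
 $ cryptsetup luksOpen /dev/sdX1 secret # unlock; prompts for the passphrase&lt;br /&gt;
 $ mkfs.ext3 /dev/mapper/secret         # create a filesystem inside the encrypted container&lt;br /&gt;
 $ mount /dev/mapper/secret /mnt&lt;br /&gt;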
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even with the additional security features not available in other distributions, Privatix is still a very desirable live system to use. It is portable, especially once installed on an external device, and easy to use with little bloatware. The default applications--OpenOffice for document editing, Pidgin for instant messaging, various graphics editors, a video player, and a CD burner/extractor--ensure that the system is still perfectly functional for everyday use, even though security, not extensive functionality, is the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix from the live CD and running it in VirtualBox were equivalent experiences. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, certain key features of the Privatix Live-System are missing (mainly because those features are meant for the portable version installed on an external device). However, booting from the live CD still provides much of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data in the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear there: they are hidden from view, but can be listed by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
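&lt;br /&gt;
For example, if the user saved a (hypothetical) file notes.txt to the desktop, it would not show on screen but could still be confirmed from the terminal:&lt;br /&gt;
&lt;br /&gt;
 $ cd ~/Desktop&lt;br /&gt;
 $ ls&lt;br /&gt;
 notes.txt&lt;br /&gt;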
&lt;br /&gt;
The main flaw in using the system through these mediums is that the added protection of a user account and password to access the system is not present on the live CD or when running Privatix in a virtual machine. This is because Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, we highly recommend that users fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with its description: a secure and portable system. The tools provided to encrypt data, and the secure browser with add-ons for anonymity, especially supported this belief. However, we also found some parts of the distribution that were a cause for concern. To begin with, there was a slight language barrier, as the system was originally written in German: both the existing English documentation and the operating system itself contain frequent grammar mistakes, and most of the supporting documentation and FAQ remain in German, though the maintainers of Privatix and its project website are in the process of translating the documentation so it is available in both English and German. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers in either portable version of the OS (installed on an external device, or simply using the live CD or booting up in a virtual machine), so wireless networks could not be connected to. This is a problem because an operating system on a USB stick should be completely portable, yet the missing drivers force you to use a wired connection for Internet access. We also noticed that when using Privatix in VirtualBox, even though Privatix itself had no wireless drivers, wireless capability was provided by the host OS (Windows).&lt;br /&gt;
&lt;br /&gt;
Lastly, when we tried to install Privatix onto a USB stick it took several attempts. We discovered that, to avoid many of the problems we encountered, it is better to use a larger external device (preferably at least 8 GB) for installation and to refrain from filling the device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
However, once connected to the Internet, all software seems to work as it should. The more basic applications such as OpenOffice, the instant messaging and email clients, and the multimedia applications function with no problems, working much as they do in any other Linux distribution. The security tools also seem to work as they should, though since we do not know how to test the limits of the security measures, we cannot say for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, giving users access to standard applications for tasks such as editing and transporting data, sending and receiving email, instant messaging and multimedia, with the added bonus of its security and anonymity features.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format used for the Privatix Live-System is DEB (the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-18-11)&amp;lt;/ref&amp;gt; The utilities used with this packaging format are dpkg and aptitude. Dpkg is the operating system&#039;s package management utility, with aptitude acting as a more user-friendly front end. Aptitude makes finding a list of installed packages quite easy: it shows the full list of installed packages, segregated into categories such as mail, web, shells and utils. Alternatively, the command line can be used to access a list of installed packages. To do this, input the following in the terminal and a list of all installed packages is generated. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this on the command line is useful, we found that using aptitude was generally better, as the packages are segregated into categories, which makes viewing the list of installed packages simpler. &lt;br /&gt;
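&lt;br /&gt;
Since dpkg -l prints every installed package, piping its output through grep is a convenient way to check for one specific package from the command line (tor here is just an example pattern):&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l | grep tor&lt;br /&gt;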
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands function in the same way: they will either install a package or upgrade an already installed version of the package. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; the configuration files is removed. To fully remove a package, however, we used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot;, which purges the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
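&lt;br /&gt;
The difference between the two is visible in dpkg&#039;s own status listing: after dpkg -r, a package still appears with the state &amp;quot;rc&amp;quot; (removed, configuration files remain), whereas after dpkg -P it disappears from the listing entirely. Using a hypothetical package name:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r somepackage&lt;br /&gt;
 $ dpkg -l somepackage     # status column shows &amp;quot;rc&amp;quot;&lt;br /&gt;
 $ dpkg -P somepackage     # now the configuration files are gone too&lt;br /&gt;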
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to speculate as to why a certain &#039;&#039;version&#039;&#039; of a package was included, not just the package itself. Based on the release date of Privatix &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix on Distrowatch](Last accessed 12-19-11)&amp;lt;/ref&amp;gt;, we determined that the usual reason a certain version of a package was included is that it was the stable release of that package at the time. We also needed to determine how heavily the packages in our distribution had been modified by the distribution&#039;s author. However, the author has stated that everything included is based on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable daily use. As such, many of the packages that come with the standard install of Privatix are included because they were part of the standard install of Debian at the time this distribution was made. Note that this reference is the main page of the distribution and that, to read it, you will need to translate it (we used Google Translate), as much of the documentation for this distribution is in German. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Category&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Version&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Upstream Source&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Vintage&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;30%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package Details&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Kernel&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt;linux-base&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31 &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		This version of the kernel was released in December of 2009, making it just under two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel was released just yesterday (11/11/2011), this version being listed as 3.1.1. &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the version of the Linux kernel on Privatix as being two years behind the current stable version of the Linux kernel.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		We believe that these packages were included within the distribution as they are the standard packages for the Linux kernel included in the standard install of Debian at the time Privatix was released.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6.32-5-686&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6-282&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32+39&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;libc&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc-bin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;2.11.2-10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;http://www.eglibc.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
		This version of libc was released in January 2011, making it approximately 11 months old.&amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21), is currently undergoing testing.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;This package was included as it is part of the standard install of Debian, and because all Linux-based systems come with a version of libc.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc6&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Shell&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;bash&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;4.1-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://tiswww.case.edu/php/chet/bash/bashtop.html&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of bash was released in approximately April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month, version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as bash is the shell included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Utilities&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;busybox&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1:1.17.1-8&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.busybox.net/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of busybox was released in approximately November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as it is the version of busybox included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Software Packaging&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1.15.8.10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://wiki.debian.org/Teams/Dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11 which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix at 3 months behind the latest stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included since dpkg is the package management system of Debian, the distribution on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Web Browser&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;IceWeasel&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.5.16-6&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceWeasel was released in March 2011, making it 9 months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is 3.5.16-11, which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the version of IceWeasel included with Privatix several point releases behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		IceWeasel was included within this distribution as it is a more security conscious browser than more mainstream browsers such as Mozilla Firefox. IceWeasel, an older version of GNU IceCat (a rebranding of FireFox), comes equipped with security features not available by default in FireFox.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Tor&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;0.2.1.29-1&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included to help increase security, anonymity and privacy while web browsing which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;TOR Button (xul-ext-torbutton)&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;1.2.5-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org/torbutton/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR Button was released in October 2010, making it just over a year old.&amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1 which was released last month. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of TORButton included with Privatix at about a year behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included in order to add to the functionality of TOR. This add-on allows the user to enable and disable TOR with the push of a button, located in the corner of their browser. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Email&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;icedove&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.0.11-1+s&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included within Privatix before it became stable: it was released as an unstable version in December 2010 and became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This email client was included because it is a more security-conscious client than others such as the regular version of ThunderBird, providing government-grade security features. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Other&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;pidgin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;2.7.3.1+sq&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.pidgin.im&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of Pidgin was released in October 2010 and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This package was included as Pidgin is the default IM client included with the standard install of Debian, the system on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Privatix generally follows the same initialization process as Debian. The system initializes by executing first the BIOS and then the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same init system as Debian, System V init: /etc/inittab is the configuration file, and the /sbin/init program initializes the system following the description in this file. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; inittab sets the default run level of Privatix, which is run level 2. Following this, all the scripts located in /etc/rc2.d are executed in alphabetical order. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; These scripts are:&lt;br /&gt;
&lt;br /&gt;
[[File:pstree1.png|thumb|right|Process tree after start up]]&lt;br /&gt;
[[File:pstree_.png|thumb|right|Process tree after start up cont.]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: enhanced multi-threaded syslogd, the Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: opens rsync--a program that allows files to be copied to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03networ-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: the log file handling to be done during bootup--mainly things that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this script does nothing, it is used only to exit&lt;br /&gt;
* &#039;&#039;&#039;S07rmologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd daemon that has been recording boot messages up to this point&lt;br /&gt;
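&lt;br /&gt;
The two-digit numbers in the script names above are what give the alphabetical ordering its meaning: S01 scripts run before S02 scripts, and so on. Once the system is up, the run level and the script links can be inspected directly:&lt;br /&gt;
&lt;br /&gt;
 $ runlevel&lt;br /&gt;
 N 2&lt;br /&gt;
 $ ls /etc/rc2.d&lt;br /&gt;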
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Following this, the system is initialized. The processes running on the newly initialized system and what initializes them are as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Process Name&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;60%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Description&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Initialized By&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; NetworkManager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Daemon that automatically switches network connections to the best available connection  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03networ-manager init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; avahi-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; zeroconf daemon which is used for configuring the network automatically  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03avahi-daemon init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bluetoothd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Enables bluetooth to be used &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03bluetooth init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; cron &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Scheduler of Debian systems  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02cron init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; dbus-launch &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Utility to send messages between processes and applications  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02dbus init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gdm3 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; GNOME display manager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; S05gdm3 init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-screensav &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-settings- &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-terminal &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo web cache--a small and fast caching web proxy &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S01polipo init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polkitd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Polkit is an application-level toolkit that allows unprivileged and privileged processes to talk to one another. &amp;lt;ref name=&amp;quot;polkit&amp;quot;&amp;gt;[http://live.gnome.org/Seahorse Polkit](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; tor &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Because the path used for each request keeps changing, it is very difficult for an observer to link your traffic back to you. &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02tor init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We found this information by first confirming that Privatix uses the same style of initialization as Debian. Once we had ascertained this, we researched the Debian boot process. Privatix follows the same steps up until the loading of the scripts, some of which differ from Debian&#039;s (e.g. TOR). We then researched each of the scripts run at Privatix&#039;s default boot runlevel of 2; the scripts are listed above in the order they execute. To find the purpose of each script and which programs it started, we went through each script manually. To find the processes running on the newly initialized system, we used the &amp;quot;pstree&amp;quot; command. Once we had a list of the running processes, we researched the purpose of each one, then matched each process with its initialization script by manually searching through the scripts.&lt;br /&gt;
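The runlevel-2 ordering described above follows the standard Debian rc2.d naming convention: the two digits after the leading S encode start order, so a plain lexical sort of the script names reproduces the boot order. A minimal sketch, using only the script names listed in the table above (not output from a live Privatix system):&lt;br /&gt;

```shell
# The two digits after the leading "S" in an rc2.d script name encode
# its start order, so sorting the names reproduces the boot order.
# Script names are the ones from the table above, not from a live system.
printf '%s\n' S03bluetooth S01polipo S02cron S05gdm3 S02tor | sort
```

Here polipo (S01) starts first, cron and tor (S02) next, then bluetooth (S03), with gdm3 (S05) last, matching the ordering behaviour described above.&lt;br /&gt;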
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16501</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16501"/>
		<updated>2011-12-19T22:41:47Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Major Package Versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our chosen distribution is the Privatix Live-System. Its target audience is people concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email, etc. The goals of this distribution are therefore mainly security and privacy related: providing security-conscious tools and applications integrated into a portable Operating System (OS) that anyone can use at any time. The distribution comes in the form of a live Compact Disc (CD) and can be installed on an external device such as a Universal Serial Bus (USB) flash drive, encrypted and password-protected so that all your data remains private even if the device is lost or compromised. It should be noted that the live CD is mainly meant for installing the OS onto a USB device; the user should not rely solely on the live CD, because it does not implement password protection. There are no user accounts on the live CD; user accounts are only created once the full OS is installed on a USB device. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained from the download page on his website ([http://www.mandalka.name/privatix/download.html.en Mandalka]) by selecting the version you wish to download (we chose the English version). The full English version is approximately 838 Megabytes (MB); smaller versions, with features such as GNOME removed, are also available.&amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can be used in a few ways other than installing it on an external device such as on a live CD/Digital Video Disk (DVD), or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once the OS has been installed on an external device as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix from a CD/DVD, simply burn the operating system to a disc and boot from it when prompted in the BIOS. While using the live CD, the user has access to almost all features of the operating system. However, because no profiles were set up, if the user locks the computer there is no way to unlock it, as no password was set up. Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox simply requires having VirtualBox installed and, when prompted for installation media, selecting the .iso file downloaded for Privatix.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option. This brings up the main desktop; in VirtualBox the user has access to the same features as when using Privatix from the live CD. However, there is one small extra layer of security, provided by the profile system of the host operating system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you must first boot into Privatix Live from a CD/DVD. Then click the install icon on the desktop to begin installing to a device, and select the device on which Privatix should install itself. The installer will ask if you would like to fill your device with blank data, which makes accessing or recovering what was originally on the device much harder. The installer will prompt you for a user password as well as an admin password, and will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
&lt;br /&gt;
To boot into Privatix from the device, interrupt the computer&#039;s normal boot and boot from the external device instead. During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is to install the system on an external device. In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use it for its intended purpose--a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in behaviour between the different machines. We used the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (Debian&#039;s re-branding of FireFox), which comes equipped with security features not available by default in FireFox. The main addition we liked was that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Because the path used for each request keeps changing, it is very difficult for an observer to link your traffic back to you. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded ThunderBird mail client (a cross-platform email client that provides government-grade security features). The email client was easily setup and used, supporting digital signing and message encryption via certificates by default (as with TOR, this could be disabled if the user wished). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (besides the external device that the system is installed on). This meant that we could have an unlimited amount of encrypted data, not being limited to the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy as much of the space on the external device that Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was easily used, well designed and was able to be utilized by absolute beginners of the system.&lt;br /&gt;
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, hence meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even setting aside the additional security features not available in other distributions, Privatix is still a very desirable live system to use. It is portable, especially once installed on an external device, and easy to use with little bloatware. The default applications--OpenOffice for editing data, Pidgin for instant messaging, various graphics editors, a video player and a CD burner/extractor--ensure that the system is still perfectly functional for everyday use, even though security, not rich functionality, is the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix using the live CD and VirtualBox was equivalent. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, you are missing certain key features of the Privatix Live-System (mainly because these features are meant for the portable version installed on an external device). However, just booting from the live CD still gives much of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data in the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear on screen; they are hidden from view but can be seen by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
&lt;br /&gt;
The main flaw in using the system only in these mediums is that the added protection of a user account and password to access the system is not present on the live CD or when using Privatix in a virtual machine. This is because Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, we highly recommend that the user fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with its description: a secure and portable system. The tools provided to encrypt data, and the secure browser with add-ons for anonymity, especially supported this belief. However, we also found some parts of the distribution that were a cause for concern. To begin with, there was a slight language barrier, as the system was originally written in German; this was made apparent by the frequent grammar mistakes in both the existing English documentation and the operating system itself. Most of the documentation for the operating system is also in German. Those who maintain Privatix and its project website are in the process of translating all their documentation so that it is available in both English and German, though currently most of the supporting documentation and FAQ are in German. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers in any version of the OS (installed on an external device, run from the live CD, or booted in a virtual machine), so wireless networks could not be connected to. This is a problem because an operating system on a USB stick should be completely portable, yet without wireless drivers you need a wired connection to use the Internet. We also noticed that when using Privatix in VirtualBox, wireless connectivity worked despite the missing drivers, because it was provided by the host OS (Windows).&lt;br /&gt;
&lt;br /&gt;
Lastly, when we tried to install Privatix onto a USB device, it took several attempts. We discovered that to avoid many of the problems we encountered, it is better to use a larger external device (preferably at least 8 GB) and to refrain from filling the device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
Once connected to the Internet, all software worked as it should. The more basic applications such as OpenOffice, the instant messaging and email clients, and the multimedia applications functioned with no problems, working much as they do in any other Linux distribution. The security tools also appeared to work as they should, though since we did not know how to test the limits of the security measures, we cannot say for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, giving users standard applications for tasks such as editing and transporting data, sending/receiving email, instant messaging and multimedia, with the added bonus of strong privacy and anonymity.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format used for the Privatix-Live System is DEB (the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The utilities used with this packaging format are dpkg and aptitude. Dpkg is the operating system&#039;s package management utility, with aptitude acting as a more user-friendly front end. Aptitude made finding a list of installed packages quite easy: it shows a full list of installed packages, segregated into categories such as mail, web, shells and utils. Alternatively, the command line can be used to list the installed packages; inputting the following in a terminal generates a list of all installed packages. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this on the command line is useful, we found aptitude generally better, as the packages are segregated into categories, which makes viewing the list of installed packages simpler. &lt;br /&gt;
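The dpkg -l listing can also be filtered from the command line: each line begins with a two-letter state code (&amp;quot;ii&amp;quot; means the package is installed). A minimal sketch over sample output (the lines below are illustrative, not taken from a live Privatix system):&lt;br /&gt;

```shell
# Filter "dpkg -l" style output: keep only lines whose state code is
# "ii" (installed) and print the package name (field 2).  The sample
# lines are illustrative, not from a live Privatix system.
printf 'ii  bash    4.1-3    GNU Bourne Again SHell\nrc  polipo  1.0.4-1  removed, config files remain\n' |
  awk '$1 == "ii" { print $2 }'
```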
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These two commands are equivalent: they either install a package or upgrade an already installed version of the package. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; the configuration files is removed. To fully remove a package, we instead used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot;, which purges the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
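The difference between the two removal commands shows up in the dpkg state codes: after dpkg -r, a package still appears in the dpkg -l listing with state &amp;quot;rc&amp;quot; (removed, configuration files remain), while after dpkg -P it disappears entirely. A small sketch over sample output (again illustrative lines, not from a live system):&lt;br /&gt;

```shell
# After "dpkg -r" a package lingers in "dpkg -l" output with state "rc";
# after "dpkg -P" it is gone.  Here we pick out such leftover packages
# from sample output (illustrative lines, not from a live system).
printf 'ii  tor     0.2.1.30-1  anonymizing overlay network\nrc  polipo  1.0.4-1     removed with -r, configs kept\n' |
  awk '$1 == "rc" { print $2 ": configuration files still present" }'
```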
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to speculate as to why a certain &#039;&#039;version&#039;&#039; of a package was included, not just the package itself. Using the release date of Privatix &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix on Distrowatch](Last accessed 12-19-11)&amp;lt;/ref&amp;gt;, we determined that the usual reason a certain version of a package was included was that it was the stable release of that package at the time. We also needed to determine how heavily the distribution&#039;s author modified the included packages. However, the author has stated that everything included is based on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable daily use. As such, many of the packages that come with the standard install of Privatix are included because they were part of the standard install of Debian at the time this distribution was made. Note that this reference was taken from the main page of the distribution and needs to be translated (we used Google Translate), as much of the documentation for this distribution is in German. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Category&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Version&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Upstream Source&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Vintage&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;30%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package Details&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Kernel&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt;linux-base&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31 &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		This version of the kernel was released in December of 2009, making it just under two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel, 3.1.1, was released just yesterday (11/11/2011). &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the version of the Linux kernel on Privatix two years behind the current stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		We believe that these packages were included within the distribution as they are the standard packages for the Linux kernel included in the standard install of Debian at the time Privatix was released.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6.32-5-686&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6-282&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32+39&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;libc&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc-bin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;2.11.2-10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;http://www.eglibc.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
		This version of libc was released in January 2011, making it approximately 11 months old. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21) is currently undergoing testing.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;This package was included because it is part of the standard install of Debian and because all Linux-based systems come with a version of libc.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc6&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Shell&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;bash&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;4.1-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://tiswww.case.edu/php/chet/bash/bashtop.html&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of bash was released in approximately April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month, version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included because bash is the shell included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Utilities&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;busybox&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1:1.17.1-8&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.busybox.net/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of busybox was released in approximately November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as it is the version of busybox included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Software Packaging&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1.15.8.10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://wiki.debian.org/Teams/Dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11 which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix at 3 months behind the latest stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included since dpkg is the package management system of Debian, the distribution that Privatix is based off of.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Web Browser&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;IceWeasel&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.5.16-6&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceWeasel was released in March 2011, making it 9 months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is version 3.5.16-11, which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot; /&amp;gt; This puts the version of IceWeasel included with Privatix about eight months behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		IceWeasel was included within this distribution as it is a more security-conscious browser than more mainstream browsers such as Mozilla Firefox. IceWeasel, a rebranding of Firefox, comes equipped with security features not available by default in Firefox.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Tor&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;0.2.1.29-1&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included to help increase security, anonymity and privacy while web browsing which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;TOR Button (xul-ext-torbutton)&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;1.2.5-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org/torbutton/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR Button was released in October 2010, making it just over a year old. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1, which was released last month. &amp;lt;ref name=&amp;quot;torbutton&amp;quot; /&amp;gt; This puts the version of TOR Button included with Privatix about a year behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included in order to add to the functionality of TOR. This add-on allows the user to enable and disable TOR with the push of a button, located in the corner of their browser. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Email&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;icedove&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.0.11-1+s&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included in Privatix before it was made stable. It was released as an unstable version in December 2010 and later became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot; /&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This email client was included because it is a more security-conscious email client than alternatives such as the standard version of Thunderbird, providing government-grade security features. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Other&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;pidgin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;2.7.3.1+sq&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.pidgin.im&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of Pidgin was released in October 2010 and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This package was included as Pidgin is the default IM client included with the standard install of Debian, the system on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Privatix generally follows the same initialization process as Debian. Privatix initializes by first executing the BIOS and then the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same initialization system as Debian: System V init. The /sbin/init program initializes the system as described in the configuration file /etc/inittab. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot; /&amp;gt; inittab sets the default run level of Privatix, which is run level 2. Following this, all the scripts located in /etc/rc2.d are executed in alphabetical order. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot; /&amp;gt; These scripts are:&lt;br /&gt;
&lt;br /&gt;
[[File:pstree1.png|thumb|right|Process tree after start up]]&lt;br /&gt;
[[File:pstree_.png|thumb|right|Process tree after start up cont.]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: enhanced multi-threaded syslogd, the Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: opens rsync--a program that allows files to be copied to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03network-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: the log file handling to be done during bootup--mainly things that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this script does nothing, it is used only to exit&lt;br /&gt;
* &#039;&#039;&#039;S07rmnologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd boot-message logger once the boot process is complete&lt;br /&gt;
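The alphabetical execution order described above comes straight from the script names: the S* links in /etc/rc2.d are run in lexical (glob) order, so the two-digit number after the S controls when each service starts. A small shell sketch (using made-up placeholder names standing in for the real links) demonstrates the ordering:&lt;br /&gt;

```shell
# Demonstrate SysV-style start ordering with placeholder script names in
# a scratch directory; the init scripts rely on the same glob expansion.
dir=$(mktemp -d)
touch "$dir/S01polipo" "$dir/S02tor" "$dir/S03bluetooth" "$dir/S05gdm3"
# The shell expands S* in sorted order, which is the order init uses:
for script in "$dir"/S*; do
    basename "$script"
done
rm -rf "$dir"
```

On a running Privatix or Debian system, listing /etc/rc2.d shows the real links in the same order.&lt;br /&gt;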
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Following this, the system is initialized. The processes running on the newly initialized system and what initializes them are as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Process Name&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;60%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Description&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Initialized By&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; NetworkManager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Daemon that automatically switches network connections to the best available connection  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03network-manager init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; avahi-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; zeroconf daemon which is used for configuring the network automatically  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03avahi-daemon init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bluetoothd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Enables bluetooth to be used &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03bluetooth init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; cron &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Scheduler of Debian systems  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02cron init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; dbus-launch &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Utility to send messages between processes and applications  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02dbus init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gdm3 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; GNOME display manager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; S05gdm3 init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-screensav &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-settings- &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-terminal &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo web cache--a small and fast caching web proxy &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S01polipo init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polkitd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Polkit is an application-level toolkit that allows unprivileged and privileged processes to talk to one another. &amp;lt;ref name=&amp;quot;polkit&amp;quot;&amp;gt;[http://live.gnome.org/Seahorse Polkit](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; tor &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since the path for each request changes, it is much harder for your traffic to be monitored. &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02tor init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We found this information by first confirming that Privatix uses the same style of initialization as Debian. Once we ascertained this, we researched the Debian boot process. Privatix follows the same steps up until the loading of the scripts, some of which differ from Debian&#039;s (e.g. TOR). Following this, we researched each of the scripts run at Privatix&#039;s default run level of 2; they are listed above in the order they execute. To find the purpose of each script and what programs it opened, we manually went through each of the scripts. To find the processes running on the newly initialized system, we used the command &amp;quot;pstree&amp;quot;. Once we had a list of the running processes, we researched the purpose of each process. To find how each process was initialized, we manually searched through the initialization scripts and matched each process with its initialization script.&lt;br /&gt;
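A similar picture to the pstree output can be reconstructed with ps alone, which is handy when pstree is not installed. A minimal sketch (the column list is standard ps output-format syntax):&lt;br /&gt;

```shell
# List every process with its parent PID; matching a process's PPID back
# to init (PID 1) shows which processes were started by the boot scripts.
ps -eo pid,ppid,comm | head -n 10
```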
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16169</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16169"/>
		<updated>2011-12-19T02:35:16Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The name of our chosen distribution is the Privatix Live-System. The target audience for this system is people who are concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email etc. The goals of this distribution are therefore mainly security and privacy related: providing security-conscious tools and applications integrated into a portable Operating System (OS) for anyone to use at any time. The distribution comes in the form of a live Compact Disc (CD) which can be installed on an external device or a Universal Serial Bus (USB) flash drive, encrypted and password protected to ensure that all your data remains private even if your external device is lost or compromised. It should be noted that the live CD is mainly meant for installing the OS onto a USB drive in order to provide a portable, privacy-conscious OS. The user should not rely solely on the live CD, as it does not implement password protection; there are no user accounts on the live CD, and user accounts are only implemented once the full OS is installed on a USB drive. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained from the download page of Markus Mandalka&#039;s website ([http://www.mandalka.name/privatix/download.html.en Mandalka]) by selecting the version you wish to download (we chose the English version). The approximate size of the Privatix Live-System is 838 Megabytes (MB) for the full English version (there are smaller versions available which have had features such as GNOME removed). &amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can be used in a few ways other than installing it on an external device such as on a live CD/Digital Video Disk (DVD), or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once the OS has been installed on an external device as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix from a CD/DVD, simply burn the operating system to a disc and boot from the CD/DVD when prompted to in the BIOS.  While using the live CD, the user will have access to almost all features of the operating system.  However, because no profiles were set up, if the user locks the computer there will be no way to unlock it, as no password was set up.  Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox simply requires having VirtualBox installed and, when prompted for the installation media, selecting the .iso file downloaded for Privatix.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option. This brings up the main desktop. While using VirtualBox, the user will have access to all features available when using Privatix with the live CD. However, there is one small extra layer of security, provided by the host operating system: its profile system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you first must boot into Privatix Live through a CD/DVD.  Then click the install icon on the desktop to begin installing to a device, and select the device for Privatix to install itself on.  The installer will ask you if you would like to fill your device with blank data; this makes accessing or recovering what was originally on the device much harder.  The installer will prompt you for a user password, as well as an admin password, and will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
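For comparison, the raw-copy step underlying an image write like this can be sketched with dd from another Linux system. This is not Privatix&#039;s own installer, which also sets up passwords and encryption; the sketch below uses scratch files so it is safe to run, and the real device name (e.g. /dev/sdX) is a placeholder you must verify with lsblk before writing, since dd overwrites its target:&lt;br /&gt;

```shell
# Demonstrate a raw image copy on scratch files instead of a real device;
# in practice the input would be the downloaded .iso and the output the
# USB device node (verify the device with lsblk before writing).
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/zero > "$src"   # stand-in for the .iso image
dd if="$src" of="$dst" bs=4096 2>/dev/null
cmp -s "$src" "$dst" && echo "copy verified"
rm -f "$src" "$dst"
```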
&lt;br /&gt;
To boot into Privatix from the device, interrupt the computer&#039;s normal boot and boot from the external device. During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is by installing the system on an external device.  In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use the system for its intended purpose--having a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in use between the different machines. We attempted to use the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (a rebranding of Firefox) which comes equipped with security features not available by default in Firefox. The main add-on we liked was that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since the path for each request changes, it is much harder for your traffic to be monitored. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded Thunderbird mail client (a cross-platform email client that provides government-grade security features). The email client was easily set up and used, supporting digital signing and message encryption via certificates by default (as with TOR, this can be disabled if the user wishes). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (other than the external device that the system is installed on). This means we could have an unlimited amount of encrypted data, not being limited by the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy, as much of the space on the device Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was easy to use, well designed and usable by absolute beginners.&lt;br /&gt;
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
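Privatix handles device encryption through its own graphical tooling; the same idea of protecting data in transit with a passphrase can be sketched for a single file with a generic command-line tool such as openssl. This is a stand-in illustration of passphrase-based encryption, not the mechanism Privatix itself uses:&lt;br /&gt;

```shell
# Encrypt a file with a passphrase, then decrypt it to confirm the round
# trip; AES-256-CBC with PBKDF2 key derivation (requires openssl 1.1.1+).
echo "sensitive notes" > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:s3cret \
    -in plain.txt -out plain.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:s3cret \
    -in plain.enc -out roundtrip.txt
cmp -s plain.txt roundtrip.txt && echo "round trip ok"
rm -f plain.txt plain.enc roundtrip.txt
```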
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even with the additional security features not available in other distributions, Privatix would still be a very desirable live system to use. It is portable, especially once installed on an external device, and easily used with little bloatware. The default applications such as OpenOffice for data editing, Pidgin for instant messaging, various graphics editors, a video player, and a CD burner/extractor ensured that the system was still perfectly functional for everyday use, even with security, rather than extensive functionality, being the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix using the live CD and VirtualBox was equivalent. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, you are missing certain key features of the Privatix Live-System (mainly because these features are meant for the portable version installed on an external device). However, just booting from the live CD still gives much of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data in the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear there.  They are hidden from view, but can be seen by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
&lt;br /&gt;
The main flaw in using the system only through these mediums is that the added protection of a user account and password to access the system is not present on the live CD or when using Privatix in a virtual machine. This is because Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, it is highly recommended that the user fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with its description: a secure and portable system. The tools provided to encrypt data and the secure browser with add-ons for anonymity especially supported this belief. However, we also found some parts of the distribution that were a cause for concern.  To begin with, there was a slight language barrier as the system was originally written in German. This was made apparent by the frequent grammar mistakes in both the existing English documentation and the operating system itself, indicating that English was not the primary language of the writers of this operating system. Most of the documentation for the operating system is also in German. Those who maintain Privatix and its project website are in the process of translating all their documentation to be available in both English and German, though currently most of the supporting documentation and FAQ are in German. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers on either portable version of the OS (installed on an external device, using the live CD, or booting in a virtual machine), so wireless networks could not be connected to.  This is a problem because an operating system on a USB stick should be completely portable, yet the missing drivers require a wired connection to use the Internet. We also noticed that when using Privatix in VirtualBox, even though there were no wireless drivers in Privatix, wireless capability was provided by the host OS (Windows).&lt;br /&gt;
&lt;br /&gt;
Lastly, installing Privatix onto a USB stick took several attempts. We found that many of the problems we encountered can be avoided by using a larger external device (preferably at least 8 GB) and by refraining from filling the device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
However, once connected to the Internet, all software seemed to work as it should. The more basic applications, such as OpenOffice, the instant messaging and email clients, and the multimedia applications, functioned with no problems, working much as they do in any other Linux distribution. The security tools also appeared to work as intended, though since we do not know how to test the limits of its security measures, we cannot say for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, giving users standard applications for tasks such as editing and transporting data, sending and receiving email, instant messaging and multimedia, with the added benefit of strong security and anonymity.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format used for the Privatix-Live System is DEB (the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The utilities used with this format are dpkg and aptitude: dpkg is the operating system&#039;s package management utility, with aptitude acting as a more user-friendly front end. Aptitude made finding a list of installed packages quite easy, presenting the full list segregated into categories such as mail, web, shells and utils. The same list can also be obtained from the command line; entering the following in a terminal generates a list of all installed packages. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this on the command line is useful, we found aptitude generally better, since its categorized view makes browsing the list of installed packages simpler. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands are equivalent: each installs the named package, or upgrades it if an older version is already installed. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; its configuration files is removed. To fully remove a package, we instead used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot;, which purges the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
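As a quick illustration of the difference, the first column of &amp;quot;dpkg -l&amp;quot; output records each package&#039;s state: &amp;quot;ii&amp;quot; means installed, while &amp;quot;rc&amp;quot; means removed with configuration files still present. The sketch below parses a small made-up sample of that output (the package name is hypothetical) rather than querying a live system:&lt;br /&gt;

```shell
# Hedged sketch: "sample" imitates two lines of "dpkg -l" output.
# "ii" = installed; "rc" = removed with "dpkg -r", config files kept.
sample='ii  bash        4.1-3  GNU Bourne Again SHell
rc  example-pkg 1.0-1  hypothetical removed package'

# List packages removed with -r that would still need -P to purge:
printf '%s\n' "$sample" | awk '$1 == "rc" { print $2 }'
```

Purging such a package with &amp;quot;dpkg -P&amp;quot; clears the remaining configuration files, after which its entry disappears from the listing entirely.&lt;br /&gt;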
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to determine how heavily the distribution&#039;s author had modified the packages included in the distribution. The author has stated that everything included is based mainly on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable, daily use. As such, many of the packages that come with the standard install of Privatix are included simply because they were part of the standard Debian install at the time the distribution was made. Please note that this reference is the main page of the distribution and, like much of the documentation, is in German, so it must be translated to be read (we used Google Translate). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Category&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Version&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Upstream Source&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Vintage&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;30%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package Details&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Kernel&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt;linux-base&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31 &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		This version of the kernel was released in December of 2009, making it just under two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel, 3.1.1, was released just yesterday (11/11/2011). &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the kernel shipped with Privatix two years behind the current stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		We believe that these packages were included within the distribution as they are the standard packages for the Linux kernel included in the standard install of Debian at the time Privatix was released.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6.32-5-686&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6-282&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32+39&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;libc&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc-bin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;2.11.2-10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;http://www.eglibc.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
		This version of libc was released in January 2011, making it approximately 11 months old.&amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21), is currently undergoing testing.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;This package was included as it is also included in the standard install of Debian as well as that all Linux-based systems come with a version of libc.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc6&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Shell&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;bash&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;4.1-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://tiswww.case.edu/php/chet/bash/bashtop.html&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of bash was released in approximately April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month, version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as bash is the version of command line included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Utilities&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;busybox&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1:1.17.1-8&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.busybox.net&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of busybox was released in approximately November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as it is the version of busybox included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Software Packaging&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1.15.8.10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://wiki.debian.org/Teams/Dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11 which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix at 3 months behind the latest stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included since dpkg is the package management system of Debian, the distribution on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Web Browser&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;IceWeasel&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.5.16-6&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceWeasel was released in March 2011, making it 9 months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is version 3.5.16-11 which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of IceWeasel included with Privatix at 9 months behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		IceWeasel was included within this distribution as it is a more security-conscious browser than mainstream browsers such as Mozilla Firefox. IceWeasel, Debian&#039;s rebranding of Firefox (closely related to GNU IceCat, which was itself originally named IceWeasel), comes equipped with security features not enabled by default in Firefox.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Tor&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;0.2.1.29-1&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included to help increase security, anonymity and privacy while web browsing which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;TOR Button (xul-ext-torbutton)&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;1.2.5-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org/torbutton/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR Button was released in October 2010, making it just over a year old.&amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1 which was released last month. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of TORButton included with Privatix at about a year behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included in order to add to the functionality of TOR. This add-on allows the user to enable and disable TOR with the push of a button, located in the corner of their browser. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Email&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;icedove&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.0.11-1+s&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included within Privatix before it was made stable: it was released as an unstable version in December 2010 and became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This email client was included because it is a more security-conscious client than others such as the regular version of ThunderBird, providing government-grade security features. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Other&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;pidgin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;2.7.3.1+sq&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.pidgin.im&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of Pidgin was released in October 2010, and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This package was included as Pidgin is the default IM client included with the standard install of Debian, the system on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Privatix generally follows the same initialization process as Debian: the BIOS executes first, followed by the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same init system as Debian, System V init: the /sbin/init program initializes the system as described in the configuration file /etc/inittab. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; inittab sets the default run level of Privatix, which is run level 2. Following this, all the scripts located in /etc/rc2.d are executed in alphabetical order. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; These scripts are:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: starts rsyslog, an enhanced multi-threaded syslogd, the Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: starts rsync--a program that copies files to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03network-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: the log file handling to be done during bootup--mainly things that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this script does nothing and simply exits&lt;br /&gt;
* &#039;&#039;&#039;S07rmnologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd daemon once boot-time logging is no longer needed&lt;br /&gt;
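The alphabetical ordering described above can be reproduced with a plain sort: System V init simply runs the rc2.d links in lexicographic order, so the two-digit number after the S controls start order. The names below are a small sample copied from the listing above:&lt;br /&gt;

```shell
# Sketch: System V init runs the /etc/rc2.d links in alphabetical
# order, so sorting the link names reproduces the boot sequence.
scripts='S02tor
S05gdm3
S01polipo
S03bluetooth
S01sudo'

printf '%s\n' "$scripts" | sort
```

Ties such as S01polipo and S01sudo still sort deterministically, since the comparison continues past the sequence number into the script name.&lt;br /&gt;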
&lt;br /&gt;
[[File:pstree1.png|thumb|right|Process tree after start up]]&lt;br /&gt;
[[File:pstree_.png|thumb|right|Process tree after start up cont.]]&lt;br /&gt;
&lt;br /&gt;
Following this, the system is initialized. The processes running on the newly initialized system and what initializes them are as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Process Name&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;60%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Description&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Initialized By&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; NetworkManager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Daemon that automatically switches network connections to the best available connection  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03network-manager init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; avahi-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; zeroconf daemon which is used for configuring the network automatically  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03avahi-daemon init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bluetoothd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Enables bluetooth to be used &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03bluetooth init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bonobo-activati &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Responsible for the activation of CORBA objects, allowing the browsing of available CORBA servers on your system (running or not), and keeping track of the running servers so one can&#039;t restart an already running server, merely reuse it. &amp;lt;ref name=&amp;quot;Bonobo Activation Tutorial&amp;quot;&amp;gt;[http://developer.gnome.org/bonobo-activation/stable/tutorial.html#id2719173 Bonobo Activation Tutorial](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; cron &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Scheduler of Debian systems  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02cron init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; dbus-launch &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Utility to send messages between processes and applications  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02dbus init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gconfd-2 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gdm3 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; GNOME display manager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; S05gdm3 init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-screensav &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-settings- &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-terminal &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-afc-volume &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;7&amp;quot;&amp;gt; User-space virtual file system. In gvfs, mounts run as separate processes which the user talks to over D-Bus. &amp;lt;ref name=&amp;quot;gvfs&amp;quot;&amp;gt;[http://packages.debian.org/sid/gvfs Gvfs Description](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;7&amp;quot;&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-gdu-volume &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-gphoto2-vo &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-burn &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-metadata &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-trash &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; login &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; mixer_applet2 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo web cache--a small and fast caching web proxy &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S01polipo init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polkitd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Polkit is an application-level toolkit that lets unprivileged and privileged processes talk to one another. &amp;lt;ref name=&amp;quot;polkit&amp;quot;&amp;gt;[http://live.gnome.org/Seahorse Polkit](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; seahorse-applet &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Seahorse is an applet for managing encryption keys.  It has plugins for nautilus and gedit. &amp;lt;ref name=&amp;quot;seahorse&amp;quot;&amp;gt;[http://live.gnome.org/Seahorse Seahorse-Applet](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; tor &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since no two &amp;quot;paths&amp;quot; for a request are ever the same, it is much harder for your traffic to be monitored. &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02tor init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; udisks-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; udisks-daemon is a block device interface that uses D-Bus.  The udisks-daemon allows for querying, mounting, unmounting, and formatting of external devices such as USB drives.  It also allows for the creation and modification of partitions.&amp;lt;ref name=&amp;quot;udisks-daemon&amp;quot;&amp;gt;[http://packages.debian.org/sid/udisks Udisks-daemon Description](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We found this information by first confirming that Privatix uses the same style of initialization as Debian. Once we ascertained this, we researched the Debian boot process. Privatix follows the same steps up until the loading of the scripts, some of which differ from Debian&#039;s (e.g. TOR). Following this, we researched each of the scripts run at Privatix&#039;s default runlevel of 2. The scripts are listed above, in the order they execute. To find the purpose of each script and the programs it started, we manually read through each script. To find the processes running on the newly initialized system, we used the &amp;quot;pstree&amp;quot; command. Once we had a list of the running processes, we researched the purpose of each process. To find how each process was initialized, we manually searched through the initialization scripts and matched each process with its initialization script.&lt;br /&gt;
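&lt;br /&gt;
The execution order above follows Debian&#039;s SysV convention: scripts in /etc/rc2.d whose names begin with &amp;quot;S&amp;quot; run at boot in ascending numeric order, which is why S01polipo starts before S02tor. A minimal sketch of that ordering rule (the third script name is hypothetical, added only to round out the example):&lt;br /&gt;

```shell
# Debian-style SysV init: Snn scripts in /etc/rc2.d run in ascending
# numeric order at boot.  S01polipo and S02tor appear in the tables above;
# S20gdm3 is a hypothetical third script for illustration.
scripts="S02tor S20gdm3 S01polipo"
first=$(printf '%s\n' $scripts | sort | head -n 1)
echo "$first"   # S01polipo -- so polipo starts before tor
```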
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16161</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16161"/>
		<updated>2011-12-19T02:30:12Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The name of our chosen distribution is the Privatix Live-System. The target audience for this system is people who are concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email etc. The goals of this distribution are therefore mainly security and privacy related: providing security-conscious tools and applications integrated into a portable Operating System (OS) for anyone to use at any time. The distribution is meant to be portable, coming in the form of a live Compact Disc (CD) which can be installed on an external device or a Universal Serial Bus (USB) flash drive with encryption and a password to ensure that all your data remains private, even if your external device is lost or compromised. It should be noted that the live CD is only meant for installing the OS onto a USB drive in order to provide a portable, privacy-conscious OS. The user should not rely solely on the live CD, as the OS does not yet implement password protection there; there are no user accounts on the live CD, and user accounts are only implemented once the full OS is installed on a USB drive. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained by going to Markus Mandalka&#039;s website, navigating to the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]), selecting the version you wish to download (we chose the English version) and downloading it. The approximate size of the Privatix Live-System is 838 Megabytes (MB) for the full English version (there are smaller versions available which have had features such as GNOME removed).&amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can also be used in ways other than installing it on an external device, such as from a live CD/Digital Video Disc (DVD) or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once the OS has been installed on an external device as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix with a CD/DVD, simply burn the operating system to a disc and boot from the CD/DVD when prompted to in the BIOS.  While using the live CD, the user will have access to almost all features of the operating system.  However, because no profiles were set up, if the user locks the computer there will be no way to unlock it, as no password was set up.  Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox requires simply having VirtualBox installed and, when prompted for the installation media, selecting the .iso file downloaded for Privatix.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option. This brings up the main desktop. While using VirtualBox, the user will have access to all features available when using Privatix with the live CD. However, there is one small extra layer of security, provided by the host operating system: its profile system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you first must boot into Privatix Live through a CD/DVD.  Then click the install icon on the desktop to begin installing to a device.  It is then possible to select a device for Privatix to install itself on.  The installer will ask if you would like to fill your device with blank data; this makes accessing or recovering what was originally on the device much harder.  The installer will prompt you for a user password, as well as an admin password.  The installer will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
&lt;br /&gt;
To boot into Privatix from the device, you can interrupt the computer&#039;s normal boot and boot from the external device instead.  During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is by installing the system on an external device.  In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use the system for its intended purpose--having a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in use between the different machines. We attempted to use the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (an older version of GNU IceCat--a re-branding of FireFox compatible with both Linux and Mac systems) which comes equipped with security features not available by default in FireFox. The main add-on we liked was that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since no two &amp;quot;paths&amp;quot; for a request are ever the same, it is much harder for your traffic to be monitored. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded ThunderBird mail client (a cross-platform email client that provides strong security features). The email client was easily set up and used, supporting digital signing and message encryption via certificates by default (as with TOR, this can be disabled if the user wishes). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (besides the external device that the system is installed on). This means that we could have an effectively unlimited amount of encrypted data, not being limited to the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy, as much of the space on the external device that Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was easy to use, well designed and usable by absolute beginners of the system.&lt;br /&gt;
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, hence meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even with the additional security features not available in other distributions, Privatix would still be a very desirable live system to use. It is portable, especially once installed on an external device, and easily used with little bloatware. The default applications such as OpenOffice for data editing, Pidgin for instant messaging, various graphics editors, a video player, and a CD burner/extractor ensured that the system was still perfectly functional for everyday use, even with security, not intense functionality, being the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix using the live CD and VirtualBox was equivalent. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, there are certain key features of the Privatix Live-System you are missing (mainly because these features are meant for the portable version installed on an external device). However, just booting from the live CD still gives a lot of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data in the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear.  They will be hidden from view, but can be seen by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
&lt;br /&gt;
The main flaw in only using the system in these mediums is that the added protection of having a user account and password to access the system is not present on the live CD or when using Privatix in a virtual machine. This is because Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, it is highly recommended that the user fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with its description: a secure and portable system. The tools provided to encrypt data and the secure browser with add-ons for anonymity especially supported this belief. However, we also found some parts of the distribution that were a cause for concern.  To begin with, there was a slight language barrier, as the system was originally written in German. This was made apparent by the frequent grammar mistakes in both the existing English documentation and the operating system itself, indicating that English was not the primary language of the writers of this operating system. Most of the documentation for the operating system is also in German. Those who maintain Privatix and its project website are in the process of translating all their documentation so that it is available in both English and German, though currently most of the supporting documentation and FAQ are in German. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers on either portable version of the OS (installed on an external device, or simply using the live CD or booting up in a virtual machine), so wireless networks could not be connected to.  This causes a problem because an operating system on a USB stick should be completely portable, yet the missing drivers require you to have a hard line to use the Internet. We also noticed that when using Privatix in VirtualBox, even though there were no wireless drivers in Privatix, wireless capability was provided by the host OS (Windows).&lt;br /&gt;
&lt;br /&gt;
Lastly, when we tried to install Privatix onto a USB stick it took several attempts. We discovered that to avoid many of the problems we encountered, it is better to use a larger (preferably at least 8 GB) external device for installation and to refrain from filling the external device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
However, once connected to the Internet, all software seems to work as it should. The more basic applications such as OpenOffice, the instant messaging and email clients, multimedia applications etc. function with no problems, working much as they do in any other Linux distribution. The security tools also seem to work as they should, though since we do not know how to test the limits of their security measures, we do not know for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, allowing users access to standard applications for tasks such as editing and transporting data, sending/receiving email, instant messaging and multimedia, with the added bonus of improved security and anonymity.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format used for the Privatix-Live System is DEB (the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The utilities used with this packaging format are dpkg and aptitude. Dpkg is the operating system&#039;s package management utility, with aptitude acting as a more user-friendly front end. Aptitude made finding a list of installed packages quite easy: it shows a full list of installed packages, segregated into categories such as mail, web, shells and utils. The command line can also be used to list installed packages. To do this, input the following in a terminal and a list of all installed packages is generated. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this on the command line is useful, we found that using aptitude was generally better, as the packages are segregated into categories which makes viewing the list of installed packages simpler. &lt;br /&gt;
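&lt;br /&gt;
The plain dpkg -l output is also easy to filter in scripts: each line for an installed package begins with the two-letter state code &amp;quot;ii&amp;quot;, followed by the package name and version. A minimal sketch (the sample line is illustrative, not captured from Privatix):&lt;br /&gt;

```shell
# dpkg -l lines start with a state code ("ii" = installed), then the
# package name and version; awk can pull out just the installed names.
# The sample line below is illustrative, not real output from Privatix.
line="ii  tor  0.2.1.29-1  anonymizing overlay network for TCP"
echo "$line" | awk '$1 == "ii" { print $2 }'   # prints: tor
```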
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands function identically: they will either install a package or upgrade an already installed version of the package. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; the configuration files is removed. To fully remove a package, however, we used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot;, which removes the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
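&lt;br /&gt;
The difference between the two shows up in dpkg -l afterwards: a package removed with -r is still listed with state &amp;quot;rc&amp;quot; (removed, configuration files remain), while a purged package disappears from the listing entirely. A minimal sketch, using a hypothetical sample line rather than real Privatix output:&lt;br /&gt;

```shell
# After "dpkg -r", a package is listed by dpkg -l with state "rc"
# (removed, config files remain); after "dpkg -P" the line is gone.
# This sample line is hypothetical, not captured from Privatix.
line="rc  polipo  1.0.4.1-1.2  caching web proxy"
state=$(echo "$line" | awk '{ print $1 }')
if [ "$state" = "rc" ]; then
    echo "config files still present; purge with dpkg -P"
fi
```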
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to determine how heavily the packages included within our distribution were modified by the distribution&#039;s author. However, the distribution&#039;s author has stated that everything included is based mainly on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the distribution&#039;s author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable and daily use. As such, many of the packages that come with the standard install of Privatix have been included because they were part of the standard install of Debian at the time this distribution was made. Please also note that this reference was taken from the main page of the distribution and that, to view it, you will need to translate it (we used Google Translate), as much of the documentation for this distribution is in German. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Category&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Version&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Upstream Source&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Vintage&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;30%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package Details&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Kernel&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt;linux-base&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31 &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		This version of the kernel was released in December of 2009, making it just under two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel, listed as 3.1.1, was released just yesterday (11/11/2011). &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the version of the Linux kernel on Privatix two years behind the current stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		We believe that these packages were included within the distribution as they are the standard packages for the Linux kernel included in the standard install of Debian at the time Privatix was released.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6.32-5-686&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6-282&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32+39&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;libc&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc-bin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;2.11.2-10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;http://www.eglibc.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
		This version of libc was released in January 2011, making it approximately 11 months old.&amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21), is currently undergoing testing.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;This package was included as it is also included in the standard install of Debian, and because all Linux-based systems come with a version of libc.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc6&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Shell&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;bash&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;4.1-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://tiswww.case.edu/php/chet/bash/bashtop.html&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of bash was released in approximately April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as bash is the version of command line included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Utilities&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;busybox&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1:1.17.1-8&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.busybox.net/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of busybox was released in approximately November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as it is the version of busybox included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
   &lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Software Packaging&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1.15.8.10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://wiki.debian.org/Teams/Dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11 which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix at 3 months behind the latest stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included since dpkg is the package management system of Debian, the distribution that Privatix is based on.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Web Browser&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;IceWeasel&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.5.16-6&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceWeasel was released in March 2011, making it 9 months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is version 3.5.16-11, which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot; /&amp;gt; This would put the version of IceWeasel included with Privatix at 9 months behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		IceWeasel was included within this distribution as it is a more security-conscious browser than more mainstream browsers such as Mozilla Firefox. IceWeasel, Debian&#039;s rebranding of Firefox, comes equipped with security features not available by default in Firefox.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Tor&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;0.2.1.29-1&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included to help increase security, anonymity and privacy while web browsing which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;TOR Button (xul-ext-torbutton)&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;1.2.5-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org/torbutton/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR Button was released in October 2010, making it just over a year old. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1, which was released in November 2011. &amp;lt;ref name=&amp;quot;torbutton&amp;quot; /&amp;gt; This would put the version of TORButton included with Privatix at about a year behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included in order to add to the functionality of TOR. This add-on allows the user to enable and disable TOR with the push of a button, located in the corner of their browser. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Email&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;icedove&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.0.11-1+s&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included within Privatix before it was made stable. It was released as an unstable version in December 2010 and later became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot; /&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This email client was included because it is a more security-conscious email client than alternatives such as the regular version of ThunderBird, providing government-grade security features. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Other&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;pidgin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;2.7.3.1+sq&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.pidgin.im&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of Pidgin was released in October 2010, and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This package was included as Pidgin is the default IM client included with the standard install of Debian, the system on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Privatix generally follows the same initialization process as Debian. Privatix initializes by first executing the BIOS and then the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same initialization system as Debian: System V init. /etc/inittab is the configuration file, and the /sbin/init program initializes the system following the description in this configuration file. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot; /&amp;gt; inittab sets the default run level of Privatix, which is run level 2. Following this, all the scripts located in /etc/rc2.d are executed alphabetically. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot; /&amp;gt; These scripts are:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: enhanced multi-threaded syslogd, the Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: opens rsync--a program that allows files to be copied to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03network-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: the log file handling to be done during bootup--mainly things that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this script does nothing and simply exits&lt;br /&gt;
* &#039;&#039;&#039;S07rmnologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd daemon, ending the logging of boot messages&lt;br /&gt;
&lt;br /&gt;
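The alphabetical ordering described above can be sketched directly: System V rc runs every /etc/rc2.d/S* script in lexical order, so the two-digit number after the S determines priority. A minimal, self-contained sketch using a few of the script names listed above (it does not read the real /etc):&lt;br /&gt;

```shell
# System V rc executes the /etc/rc2.d/S* scripts in lexical (alphabetical)
# order, so S01* scripts start before S02*, and so on.
scripts="S02tor S05gdm3 S01polipo S03bluetooth"
for s in $(printf '%s\n' $scripts | sort); do
    echo "start: /etc/rc2.d/$s"
done
```

Run with any POSIX shell; S01polipo is printed first and S05gdm3 last, mirroring the boot order above.&lt;br /&gt;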
&lt;br /&gt;
Following this, the system is initialized. The processes running on the newly initialized system and what initializes them are as follows: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Process Name&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;60%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Description&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Initialized By&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; NetworkManager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Daemon that automatically switches network connections to the best available connection  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03network-manager init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; avahi-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; zeroconf daemon which is used for configuring the network automatically  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03avahi-daemon init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bluetoothd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Enables bluetooth to be used &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03bluetooth init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bonobo-activati &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Responsible for the activation of CORBA objects, allowing the browsing of available CORBA servers on your system (running or not), and keeping track of the running servers so one can&#039;t restart an already running server, merely reuse it. &amp;lt;ref name=&amp;quot;Bonobo Activation Tutorial&amp;quot;&amp;gt;[http://developer.gnome.org/bonobo-activation/stable/tutorial.html#id2719173 Bonobo Activation Tutorial](Last accessed 18-12-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; cron &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Scheduler of Debian systems  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02cron init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; dbus-launch &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Utility to send messages between processes and applications  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02dbus init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gconfd-2 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gdm3 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; GNOME display manager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; S05gdm3 init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-screensav &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-settings- &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-terminal &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-afc-volume &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;7&amp;quot;&amp;gt; User-space virtual file system. In gvfs, mounts are run as separate processes that the user can talk to using D-Bus. &amp;lt;ref name=&amp;quot;gvfs&amp;quot;&amp;gt;[http://packages.debian.org/sid/gvfs Gvfs Description](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;7&amp;quot;&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-gdu-volume &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-gphoto2-vo &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-burn &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-metadata &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-trash &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; login &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; mixer_applet2 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo web cache--a small and fast caching web proxy &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S01polipo init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polkitd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Polkit is an application-level toolkit that allows unprivileged and privileged processes to talk to one another. &amp;lt;ref name=&amp;quot;polkit&amp;quot;&amp;gt;[http://live.gnome.org/Seahorse Polkit](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; seahorse-applet &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Seahorse is an applet to manage encryption keys. It has plugins for Nautilus and gedit. &amp;lt;ref name=&amp;quot;seahorse&amp;quot;&amp;gt;[http://live.gnome.org/Seahorse Seahorse-Applet](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; tor &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since the &amp;quot;path&amp;quot; taken by each request changes, it is much harder for your traffic to be monitored. &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02tor init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; udisks-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; udisks-daemon is a block device interface using D-Bus. The udisks-daemon allows for querying, mounting, unmounting, and formatting of external devices such as USB drives. It also allows for the creation and modification of partitions and filesystems. &amp;lt;ref name=&amp;quot;udisks-daemon&amp;quot;&amp;gt;[http://packages.debian.org/sid/udisks Udisks-daemon Description](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We found this information by first confirming that Privatix uses the same style of initialization as Debian. Once we ascertained this, we researched the Debian boot process. Privatix followed the same steps up until the loading of the scripts, some of which differed from Debian (e.g. TOR). Following this, we researched each of the scripts run at Privatix&#039;s default run level of 2. The scripts are listed above, in the order they execute. To find the purpose of each script and what programs it opened, we manually went through each of the scripts. To find the processes running on the newly initialized system, we used the &amp;quot;pstree&amp;quot; command. Once we had a list of the running processes, we researched the purpose of each process. To find how each process was initialized, we manually searched through the initialization scripts and matched each process with its initialization script.&lt;br /&gt;
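The process-to-script matching described above can be approximated from the shell. A hedged sketch (the pstree and grep invocations are what one would typically run on a Debian-style layout; the runnable part below matches against the script names from this report rather than reading the real /etc):&lt;br /&gt;

```shell
# On the live system the inspection would be something like:
#   pstree -p                  # show the tree of running processes
#   grep -l tor /etc/rc2.d/S*  # find the init script that mentions a process
# Offline sketch of the same name matching, using the script names above:
proc="tor"
for s in S01polipo S02tor S03bluetooth S05gdm3; do
    case "$s" in
        *"$proc"*) echo "$proc is started by the $s init script" ;;
    esac
done
```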
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16157</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=16157"/>
		<updated>2011-12-19T02:22:22Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Initialization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The name of our chosen distribution is the Privatix Live-System. The target audience for this system is people who are concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email etc. Therefore, the goals of this distribution are mainly security and privacy related, which means being able to provide security-conscious tools and applications integrated into a portable Operating System (OS) for anyone to use at any time. The distribution is meant to be portable, coming in the form of a live Compact Disc (CD) which can be installed on an external device or a Universal Serial Bus (USB) flash drive with password-protected encryption to ensure that all your data remains private, even if your external device is lost or compromised. It should be noted that the live CD is only meant for installing the OS onto a USB drive in order to provide a portable, privacy-conscious OS. The user should not rely solely on the live CD, as the OS does not yet implement password protection there: there are no user accounts on the live CD, and user accounts are only implemented once the full OS is installed on a USB drive. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained by going to Markus Mandalka&#039;s website, navigating to the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]), selecting the version you wish to download (we chose the English version) and downloading it. The approximate size of the Privatix Live-System is 838 Megabytes (MB) for the full English version (there are smaller versions available which have had features such as GNOME removed). &amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can be used in a few ways other than installing it on an external device, such as on a live CD/Digital Versatile Disc (DVD) or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once the OS has been installed on an external device, as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
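Whichever route is chosen, it is worth verifying the downloaded image first. A hedged sketch (the checksum must come from the project's own download page, and the burn command assumes the wodim tool is installed; an empty temp file stands in for the real .iso so the sketch is self-contained):&lt;br /&gt;

```shell
# Verify the downloaded image against the checksum published on the
# download page before using it.
iso=$(mktemp)   # stand-in for the real privatix .iso download
sum=$(sha256sum "$iso" | awk '{print $1}')
echo "compare $sum with the checksum on the download page"
# Then burn it to a CD/DVD with your preferred tool, e.g.:
#   wodim -v dev=/dev/cdrw privatix.iso
rm -f "$iso"
```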
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix with a CD/DVD, simply burn the operating system to a disc and boot from the CD/DVD when prompted to in the BIOS.  While using the live CD, the user will have access to almost all features of the operating system.  However, because no profiles were set up, if the user locks the computer there will be no way to unlock it, as no password was set up.  Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox simply requires having VirtualBox installed and, when prompted for the installation media, selecting the .iso file downloaded for Privatix.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option.  This brings up the main desktop.  While using VirtualBox, the user will have access to all features available when using Privatix with the live CD.  However, there is one small extra layer of security, provided by the host operating system: the host operating system&#039;s own profile system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you first must be booted into Privatix Live through a CD/DVD.  Then you need to click the install icon on the desktop to begin installing to a device.  It is then possible to select a device for Privatix to install itself on.  The installer will ask you if you would like to fill your device with blank data; this makes accessing or recovering what was originally on the device much harder.  The installer will prompt you for a user password, as well as an admin password.  The installer will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
&lt;br /&gt;
To boot into Privatix from the device, you can interrupt the computer booting and then boot into the external device.  During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is by installing the system on an external device.  In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use the system for its intended purpose--having a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in use between the different machines. We attempted to use the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (Debian&#039;s re-branding of Firefox, compatible with both Linux and Mac systems), which comes equipped with security features not available by default in Firefox. The main addition that we liked was that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since the &amp;quot;path&amp;quot; taken by each request changes, it is much harder for your traffic to be monitored. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
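One way to confirm from the command line that traffic is actually leaving through TOR is to query its local SOCKS proxy. A hedged sketch (9050 is TOR's default SOCKS port; the network request itself is left commented out since it needs a running TOR daemon and Internet access):&lt;br /&gt;

```shell
# TOR exposes a SOCKS5 proxy on localhost; applications such as IceWeasel
# are pointed at this address so their traffic enters the TOR network.
tor_socks="127.0.0.1:9050"   # 9050 is TOR's default SOCKS port
echo "route browser traffic through socks5://$tor_socks"
# With TOR running, this request should report that you are using TOR:
#   curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/
```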
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded ThunderBird mail client (a cross-platform email client that provides government-grade security features). The email client was easily set up and used, supporting digital signing and message encryption via certificates by default (as with TOR, this could be disabled if the user wished). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (besides the external device that the system is installed on). This means that we could have an unlimited amount of encrypted data, not being limited to the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy, as much of the space on the external device that Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was well designed, easy to use, and usable by absolute beginners of the system.&lt;br /&gt;
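Under the hood, Debian-based encryption tools of this era typically wrap LUKS via cryptsetup; a hedged sketch of the equivalent manual steps (an assumption about Privatix's GUI tool, which we did not inspect; the device path is a placeholder and the destructive commands are commented out):&lt;br /&gt;

```shell
# Manual LUKS equivalent of encrypting a secondary USB stick.
# WARNING: luksFormat destroys existing data, so the destructive commands
# are commented out; /dev/sdX1 is a placeholder -- identify the real
# device with lsblk before touching anything.
dev="/dev/sdX1"
echo "would encrypt $dev with LUKS"
#   cryptsetup luksFormat "$dev"            # initialize the encrypted volume
#   cryptsetup luksOpen "$dev" secure_usb   # unlock as /dev/mapper/secure_usb
#   mkfs.ext4 /dev/mapper/secure_usb        # create a filesystem inside it
```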
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, hence meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even with the additional security features not available in other distributions, Privatix would still be a very desirable live system to use. It is portable, especially once installed on an external device, and easily used with little bloatware. The default applications, such as OpenOffice for data editing, Pidgin for instant messaging, various graphics editors, a video player, and a CD burner/extractor, ensured that the system was still perfectly functional for everyday use, even with security, rather than extensive functionality, being the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix using the live CD and VirtualBox was equivalent. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, there are certain key features of the Privatix Live-System you are missing (mainly because these features are meant for the portable version to be installed on an external device). However, just booting from the live CD still gives a lot of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data in the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear.  They will be hidden from view, but can be viewed by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
&lt;br /&gt;
The main flaw in using the system only through these mediums is that the added protection of a user account and password to access the system is not present on the live CD or when running Privatix in a virtual machine. This is because Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, we would highly recommend that the user fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with its description as a secure and portable system. The tools provided to encrypt data, and the secure browser with add-ons for anonymity, especially supported this impression. However, we also found some parts of the distribution that were cause for concern. To begin with, there was a slight language barrier, as the system was originally written in German; this was apparent from the frequent grammatical mistakes in both the existing English documentation and the operating system itself. Those who maintain Privatix and its project website are in the process of translating all their documentation so that it is available in both English and German, though currently most of the supporting documentation and FAQ are in German only. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers in either portable version of the OS (installed on an external device, or running from the live CD or in a virtual machine), so wireless networks could not be connected to. This is a problem because an operating system on a USB stick should be completely portable, yet the missing drivers force the user onto a wired connection for Internet access. We also noticed that when using Privatix in VirtualBox, even though Privatix itself had no wireless drivers, wireless connectivity was provided by the host OS (Windows).&lt;br /&gt;
&lt;br /&gt;
Lastly, when we tried to install Privatix onto a USB stick, it took several attempts. We discovered that to avoid many of the problems we encountered, it is better to use a larger external device (preferably at least 8&amp;nbsp;GB) for installation and to refrain from filling the external device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
However, once connected to the Internet, all software seemed to work as it should. The more basic applications, such as OpenOffice, the instant messaging and email clients, and the multimedia applications, functioned with no problems, working much as they do in any other Linux distribution. The security tools also seemed to work as they should, though since we do not know how to test the limits of its security measures, we cannot say for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, giving users access to standard applications for tasks such as editing and transporting data, sending/receiving email, instant messaging and multimedia, with a strong emphasis on security and anonymity.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format used by the Privatix Live-System is DEB (the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The utilities used with this packaging format are dpkg and aptitude. Dpkg is the operating system&#039;s package management utility, with aptitude acting as a more user-friendly front end. Aptitude made finding a list of installed packages quite easy: it shows a full list of installed packages, segregated into categories such as mail, web, shells and utils. Alternatively, the command line can be used to list installed packages by entering the following in a terminal. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this on the command line is useful, we found that aptitude was generally better, as the packages are segregated into categories, which made viewing the list of installed packages simpler. &lt;br /&gt;
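&lt;br /&gt;
The command-line listing can also be filtered down to a particular package; for example, using tor as a sample package name:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l | grep tor&lt;br /&gt;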
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands function identically: they either install the package or upgrade an already installed version of it. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; the configuration files are removed. To fully remove a package, however, we used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot; which removes the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
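&lt;br /&gt;
To confirm which state a removal left a package in, its status can be queried; a package removed with -r but not purged is reported with a &amp;quot;config-files&amp;quot; status:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -s &amp;lt;package name&amp;gt;&lt;br /&gt;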
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to determine how heavily the packages included in our distribution had been modified by the distribution&#039;s author. However, the author has stated that everything included is based on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable and daily use. As such, many of the packages that come with the standard install of Privatix were included because they were part of the standard install of Debian at the time this distribution was made. Please also note that this reference was taken from the main page of the distribution, and that to read it you will need to translate it (we used Google Translate), as much of the documentation for this distribution is in German. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Category&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Version&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Upstream Source&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Vintage&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th width=&amp;quot;30%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Package Details&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Kernel&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt;linux-base&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31 &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		This version of the kernel was released in December 2009, making it about two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel, 3.1.1, was released just yesterday (11/11/2011). &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the kernel version shipped with Privatix two years behind the current stable Linux kernel.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
		We believe that these packages were included within the distribution as they are the standard packages for the Linux kernel included in the standard install of Debian at the time Privatix was released.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6.32-5-686&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32-31&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;linux-image-2.6-282&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;2.6.32+39&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;libc&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc-bin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;2.11.2-10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;http://www.eglibc.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
		This version of libc was released in January 2011, making it approximately 11 months old.&amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21), is currently undergoing testing.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;2&amp;quot;&amp;gt;This package was included as it is also included in the standard install of Debian as well as that all Linux-based systems come with a version of libc.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;libc6&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Shell&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;bash&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;4.1-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://tiswww.case.edu/php/chet/bash/bashtop.html&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of bash was released in, approximately, April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month, version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as bash is the version of command line included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Utilities&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;busybox&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1:1.17.1-8&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.busybox.net&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This version of busybox was released in, approximately, November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included as it is the version of busybox included with the standard install of Debian.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Software Packaging&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;1.15.8.10&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://wiki.debian.org/Teams/Dpkg&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11 which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix at 3 months behind the latest stable version.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;This package was included since dpkg is the package management system of Debian, the distribution that Privatix is based off of.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;3&amp;quot;&amp;gt;Web Browser&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;IceWeasel&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.5.16-6&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceWeasel was released in March 2011, making it 9 months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is version 3.5.16-11 which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of IceWeasel included with Privatix at 9 months behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		IceWeasel was included within this distribution as it is a more security-conscious browser than more mainstream browsers such as Mozilla Firefox. IceWeasel, Debian&#039;s rebranded build of Firefox (related to GNU IceCat, another Firefox rebranding), ships with privacy settings not enabled by default in Firefox.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Tor&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;0.2.1.29-1&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included to help increase security, anonymity and privacy while web browsing which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;TOR Button (xul-ext-torbutton)&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;1.2.5-3&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;https://www.torproject.org/torbutton/&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This version of TOR Button was released in October 2010, making it just over a year old.&amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1 which was released last month. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of TORButton included with Privatix at about a year behind the latest stable release.&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
		This package was included in order to add to the functionality of TOR. This add-on allows the user to enable and disable TOR with the push of a button, located in the corner of their browser. &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Email&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;icedove&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;3.0.11-1+s&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;None Provided&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included within Privatix before it was made stable: it was released as an unstable version in December 2010 and later became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This email client was included because it is more security-conscious than others such as the regular version of Thunderbird; IceDove is Debian&#039;s rebranded build of Mozilla Thunderbird. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;Other&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;pidgin&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;2.7.3.1+sq&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;http://www.pidgin.im&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This version of Pidgin was released in October 2010 and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td rowspan=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		This package was included as Pidgin is the default IM client included with the standard install of Debian, the system on which Privatix is based.&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
&lt;br /&gt;
Privatix generally follows the same initialization process as Debian: the BIOS executes first, followed by the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same init system as Debian, System V init: /etc/inittab is the configuration file, and the /sbin/init program initializes the system following the description in this configuration file. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; inittab sets the default run level of Privatix, which is run level 2. Following this, all the scripts located in /etc/rc2.d are executed in alphabetical order. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; These scripts are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Process Name&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;60%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Description&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;th width=&amp;quot;20%&amp;quot;&amp;gt; &amp;lt;b&amp;gt;Initialized By&amp;lt;/b&amp;gt; &amp;lt;/th&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; NetworkManager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Daemon that automatically switches network connections to the best available connection  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03networ-manager init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; avahi-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; zeroconf daemon which is used for configuring the network automatically  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03avahi-daemon init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bluetoothd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Enables bluetooth to be used &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S03bluetooth init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; bonobo-activati &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Responsible for the activation of CORBA objects, allowing the browsing of available CORBA servers on your system (running or not), and keeping track of the running servers so one can&#039;t restart an already running server, merely reuse it. &amp;lt;ref name=&amp;quot;Bonobo Activation Tutorial&amp;quot;&amp;gt;[http://developer.gnome.org/bonobo-activation/stable/tutorial.html#id2719173 Bonobo Activation Tutorial](Last accessed 18-12-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; cron &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Scheduler of Debian systems  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02cron init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; dbus-launch &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; Utility to send messages between processes and applications  &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02dbus init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gconfd-2 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gdm3 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; GNOME display manager &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;4&amp;quot;&amp;gt; S05gdm3 init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-screensav &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-settings- &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gnome-terminal &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-afc-volume &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;7&amp;quot;&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td rowspan=&amp;quot;7&amp;quot;&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-gdu-volume &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfs-gphoto2-vo &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-burn &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-metadata &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; gvfsd-trash &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; login &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; mixer_applet2 &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polipo web cache--a small and fast caching web proxy &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S01polipo init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; polkitd &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; seahorse-applet &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; D &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; tor &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since no two &amp;quot;paths&amp;quot; for a request are ever the same, it is very difficult for your traffic to be monitored. &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; S02tor init script &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; udisks-daemon &amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; udisks-daemon is a block device interface, using D-bus.  The udisks-daemon allows for querying, mounting, unmounting, and formatting of external devices such as USB drives.  It also allows for the creation and modification of partitions.&amp;lt;ref name=&amp;quot;udisks-daemon&amp;quot;&amp;gt;[http://packages.debian.org/sid/udisks Udisks-daemon Description](Last accessed 12-18-11)&amp;lt;/ref&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
		&amp;lt;td&amp;gt; IB &amp;lt;/td&amp;gt;&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: starts the enhanced, multi-threaded syslogd, the Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: starts rsync--a program that allows files to be copied to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03network-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: handles log files during bootup--mainly tasks that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this file does nothing and simply exits&lt;br /&gt;
* &#039;&#039;&#039;S07rmnologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd daemon once the boot process is complete&lt;br /&gt;
&lt;br /&gt;
Following this, the system is initialized.&lt;br /&gt;
&lt;br /&gt;
We found this information by first confirming that Privatix used the same style of initializing as Debian. Once we ascertained this, we researched the Debian boot process. Privatix followed the same steps up until the loading of the scripts, which had some scripts that differed from Debian (e.g. TOR). Following this, we researched each of the scripts run on Privatix&#039;s default boot level of 2. The scripts are listed above, in the order they execute. To find the purpose of each of the scripts and what programs they opened, we manually went through each of the scripts.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Week_11_Notes&amp;diff=15623</id>
		<title>COMP 3000 2011 Week 11 Notes</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Week_11_Notes&amp;diff=15623"/>
		<updated>2011-12-15T18:35:10Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: Created page with &amp;quot;==Networking==  It should be noted that, if networking were taken away, there would be no need for modern operating systems. Modern operating system architecture is very good for…&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Networking==&lt;br /&gt;
&lt;br /&gt;
It should be noted that, if networking were taken away, there would be no need for modern operating systems. Modern operating system architecture is very good for networking--the kernel handles receiving messages, processing them and delivering them to the appropriate process. The kernel receives an interrupt from the network card, notifying it that it has a message to process and deliver.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Networking&#039;&#039;&#039;: communication between point A and point B&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Packet&#039;&#039;&#039;: chunk of data with a fixed maximum size (normally 1500 bytes). If one wants to send a packet larger than the maximum size, the larger packet is fragmented into smaller ones. Packets can be repeatedly &amp;quot;wrapped&amp;quot; (e.g. data is wrapped in an IP packet, which is wrapped in an ethernet packet, which is wrapped in a DSL packet etc. and, once the destination is reached, all of this is stripped).&lt;br /&gt;
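The fragmentation step above can be sketched as simple slicing (an illustration only, not real kernel code--actual IP fragmentation also copies and adjusts headers; 1500 bytes is the typical Ethernet maximum):

```python
# Sketch: splitting an oversized payload into fragments of at most
# MTU bytes. The 1500-byte MTU is the typical Ethernet value.
MTU = 1500

def fragment(payload):
    """Split a byte payload into chunks of at most MTU bytes."""
    return [payload[i:i + MTU] for i in range(0, len(payload), MTU)]

pieces = fragment(b"x" * 3200)
print([len(p) for p in pieces])  # [1500, 1500, 200]
```

Reassembly at the destination is just joining the pieces back together in order.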
&lt;br /&gt;
&#039;&#039;&#039;TCP/IP&#039;&#039;&#039;: one way to format a packet (the &amp;quot;to&amp;quot; and &amp;quot;from&amp;quot; fields associated with the packet are numbers--IP addresses--while the contents of the &amp;quot;message&amp;quot; are the data to be sent)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Routers&#039;&#039;&#039;: copy packets and relay them from router to router until the destination is reached. However, along the way, a packet may be damaged or dropped. This system of sending packets is a &amp;quot;best effort&amp;quot; service. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;TCP&#039;&#039;&#039;: provides retransmission when something is lost and reordering of packets; interprets packet loss as congestion and slows down&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sockets&#039;&#039;&#039;: file-like abstraction provided by the operating system&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ports&#039;&#039;&#039;: ID the operating system uses to determine which process gets the data&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Streaming Protocols&#039;&#039;&#039;: send datagrams as a continuous stream, usually for things like media. If packets are lost, the stream just moves on. This means that data can be lost but everything still works; there is only a degradation in quality (i.e. the entire thing doesn&#039;t stop working).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Firewall&#039;&#039;&#039;: filters network traffic, often by blocking ports. A standard firewall works at the TCP or IP layer. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DPI&#039;&#039;&#039;: deep packet inspection examines the data inside the packet, not just the &amp;quot;wrappers&amp;quot; around the data&lt;br /&gt;
&lt;br /&gt;
Timeline of how data is sent to processes:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Data&#039;&#039;&#039; -&amp;gt; &#039;&#039;&#039;Ethernet Card&#039;&#039;&#039; (via ethernet card MAC address) -&amp;gt; &#039;&#039;&#039;Driver&#039;&#039;&#039; (interrupt associated with ethernet card) -&amp;gt; data is processed and given to the right process -&amp;gt; &#039;&#039;&#039;network stack&#039;&#039;&#039; (makes available to right process; puts the data in the right buffer) -&amp;gt; data now available to process&lt;br /&gt;
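The last step--data becoming available to a process--happens through a socket bound to a port. A minimal sketch of the socket/port abstraction (assuming a POSIX-style system; binding to port 0 asks the kernel to pick any free port):

```python
# Sketch: a socket is a file-like object handed to the process, and the
# port is the ID the kernel uses to decide which process gets the data.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # a TCP socket
s.bind(("127.0.0.1", 0))       # loopback only; port 0 = kernel picks one
host, port = s.getsockname()   # the kernel-assigned port number
print("socket bound to", host, "port", port)
s.close()
```

Any data arriving for that port is placed by the network stack into this socket's buffer, where the process reads it like a file.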
&lt;br /&gt;
Note that the driver, processing the data, and network stack steps of this process are all fulfilled by the kernel.&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Week_10_Notes&amp;diff=15622</id>
		<title>COMP 3000 2011 Week 10 Notes</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Week_10_Notes&amp;diff=15622"/>
		<updated>2011-12-15T18:11:37Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: Created page with &amp;quot;==Virtual Memory==  &amp;#039;&amp;#039;&amp;#039;Virtual Memory&amp;#039;&amp;#039;&amp;#039;: memory management technique; made of 32 bits divided into a 20 bit frame number and 12 bit offset that maps perfectly to a physical offs…&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Virtual Memory==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Virtual Memory&#039;&#039;&#039;: memory management technique; a 32 bit virtual address is divided into a 20 bit page number and a 12 bit offset, where the offset maps directly to the offset within a physical frame&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;32 bit address&#039;&#039;&#039;: the first 20 bits of a virtual address are the page number, with the lower 12 bits specifying the offset within that page&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Physical Memory&#039;&#039;&#039;: divided into frames where each frame is 4K--note that each page is also 4K, meaning that any page can fit in any frame (solving the external fragmentation problem)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;TLB&#039;&#039;&#039;: a cache that maps virtual addresses to their corresponding physical ones--this is done by mapping the 20 bit virtual page number, located in the upper 20 bits of a 32 bit address, to the corresponding 20 bit physical frame number. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Page Table&#039;&#039;&#039;: all virtual -&amp;gt; physical mappings are stored in the page table; when constructing a page table, a broad flat tree is desired (a good out degree for this is 1024). Page tables make use of the 4K structure of the system by having 1024 page table entries of 32 bits (4 bytes) each, which makes exactly 4K bytes--one frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PTE&#039;&#039;&#039;: page table entries, made of 32 bits on most architectures. &lt;br /&gt;
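The 20/12 split above is just arithmetic on the address. A small sketch (the page-to-frame mapping here is a made-up stand-in for a real page table or TLB lookup):

```python
# Sketch: decomposing a 32-bit virtual address into a 20 bit page
# number and a 12 bit offset, assuming 4K (4096 byte) pages.
PAGE_SIZE = 4096  # 4K pages, so 12 offset bits

vaddr = 0x12345678
page = vaddr // PAGE_SIZE    # upper 20 bits: virtual page number
offset = vaddr % PAGE_SIZE   # lower 12 bits: offset within the page
print(hex(page), hex(offset))  # 0x12345 0x678

# A page table / TLB lookup maps page -> frame; the offset carries over
# unchanged. The translation below is purely hypothetical.
frame = page + 7
paddr = frame * PAGE_SIZE + offset
print(hex(paddr))  # 0x1234c678
```

The offset never changes during translation, which is why pages and frames must be the same size.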
&lt;br /&gt;
===Managing Memory===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;LRU&#039;&#039;&#039;: least recently used--the least recently used page will get kicked out of memory&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Disk Cache&#039;&#039;&#039;: in-memory cache of files on disk--the problem is figuring out which memory to evict first. The memory you can usually evict first is: &lt;br /&gt;
&lt;br /&gt;
# Memory that is already written to disk&lt;br /&gt;
# Files and program binaries as they are already synced to disk&lt;br /&gt;
&lt;br /&gt;
Other memory that is safe to evict is memory that has both the dirty bit and the accessed bit flipped to 0. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Dirty Bit&#039;&#039;&#039;: flipped to 0 if what is in memory is only a copy of what is on disk, meaning that, if the page is kicked out of memory, there is no data loss&lt;br /&gt;
* &#039;&#039;&#039;Accessed Bit&#039;&#039;&#039;: has the memory this bit pertains to been accessed or not? A clock algorithm runs routinely and sets the accessed bit of all memory to zero. Next time the algorithm goes through, if it sees a 1 somewhere, the memory has been accessed since the last sweep, meaning we should keep it. However, if somewhere still has a 0, the memory has not been accessed since the last sweep&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=15152</id>
		<title>COMP 3000 2011 Report: Privatix</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_2011_Report:_Privatix&amp;diff=15152"/>
		<updated>2011-12-04T15:19:38Z</updated>

		<summary type="html">&lt;p&gt;Gbooth: /* Major Package Versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Part 1=&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The name of our chosen distribution is the Privatix Live-System. The target audience for this system is people who are concerned about privacy, anonymity and security when web-surfing, transporting/editing sensitive data, sending email etc. Therefore, the goals of this distribution are mainly security and privacy related, which means providing security-conscious tools and applications integrated into a portable Operating System (OS) for anyone to use at any time. The distribution is meant to be portable, coming in the form of a live Compact Disc (CD), and can be installed on an external device or a Universal Serial Bus (USB) flash drive with encryption and a password to ensure that all your data remains private, even if your external device is lost or compromised. It should be noted that the live CD is only meant for installing the OS onto a USB drive in order to provide a portable, privacy-conscious OS. The user should not rely solely on the live CD, as the OS does not yet implement password protection at that stage. This is because there are no user accounts on the live CD; user accounts are only implemented once the full OS is installed on a USB drive. The Privatix Live-System incorporates many security-conscious tools for safe editing, carrying sensitive data, encrypted communication and anonymous web surfing, such as built-in software to encrypt external devices, IceWeasel and TOR. &amp;lt;ref name=&amp;quot;privatix home&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix home page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This Privatix Live-System was developed in Germany by Markus Mandalka. It may be obtained by going to Markus Mandalka&#039;s website and navigating to the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]), selecting the version you wish to download (we chose the English version) and downloading it. The approximate size of the Privatix Live-System is 838 Megabytes (MB) for the full English version (there are smaller versions available which have had features such as GNOME removed).&amp;lt;ref name=&amp;quot;Privatix download page&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/download.html.en Privatix download page](Last accessed 10-10-11)&amp;lt;/ref&amp;gt; The Privatix Live-System is based on Debian ([http://distrowatch.com/table.php?distribution=debian Debian]).&lt;br /&gt;
&lt;br /&gt;
==Installation/Startup==&lt;br /&gt;
[[File:PrivatixBoot.png|thumb|right|Privatix boot screen]]&lt;br /&gt;
[[File:PrivatixDesktop.png|thumb|right|Privatix desktop]]&lt;br /&gt;
Currently we have Privatix installed on an 8 Gigabyte (GB) USB stick in order to utilize the full power of the OS. However, Privatix can be used in a few ways other than installing it on an external device such as on a live CD/Digital Video Disk (DVD), or in a virtualized environment such as VirtualBox. It should be noted, however, that the full potential of the OS is only unlocked once the OS has been installed on an external device as it was meant to be. One main flaw in using either a virtual environment or the live CD is that user accounts, and hence password protection, are not implemented until the OS has been installed on an external device.&lt;br /&gt;
&lt;br /&gt;
To install the Privatix-Live System, the user must first download the .iso from the download page ([http://www.mandalka.name/privatix/download.html.en Mandalka]). Once the .iso file is downloaded, it is possible to either burn the operating system to a CD/DVD, use VirtualBox, or install it to a USB stick. &lt;br /&gt;
 &lt;br /&gt;
===CD/DVD===&lt;br /&gt;
	To install and boot Privatix with a CD/DVD, simply burn the operating system to a disc and boot from the CD/DVD when prompted to in the BIOS.  While using the live CD, the user will have access to almost all features of the operating system.  However, because no profiles were set up, if the user locks the computer, there will be no way to unlock it as no password was set up.  Note that the main purpose of the live CD/DVD is to install the OS on an external device.&lt;br /&gt;
&lt;br /&gt;
===VirtualBox===&lt;br /&gt;
	Using VirtualBox requires simply having VirtualBox installed and selecting the downloaded Privatix .iso file when prompted for the installation media.  &lt;br /&gt;
&lt;br /&gt;
When the system starts up, select the Live option.  This brings up the main desktop.  While using VirtualBox, the user will have access to all features available when using Privatix with the live CD.  However, there is one small extra layer of security, provided by the host operating system: its profile system.&lt;br /&gt;
&lt;br /&gt;
===USB===&lt;br /&gt;
	To install Privatix onto a USB stick, you first must be booted into Privatix Live through a CD/DVD.  Then you need to click the install icon on the Desktop to begin installing to a device.  It is then possible to select a device for Privatix to install itself on.  The installer will ask you if you would like to fill your device with blank data; this makes accessing data/recovering what was originally on the device much harder.  The installer will prompt you for a user password, as well as an admin password.  The installer will then start its time-consuming process of installing Privatix to the device.&lt;br /&gt;
&lt;br /&gt;
To boot into Privatix from the device, you can interrupt the computer&#039;s normal boot and boot from the external device.  During booting, Privatix will prompt you for the password set up during the installation.&lt;br /&gt;
&lt;br /&gt;
==Basic Operation==&lt;br /&gt;
&lt;br /&gt;
===On An External Device===&lt;br /&gt;
&lt;br /&gt;
The main way of utilizing the Privatix Live-System is by installing the system on an external device.  In our case, we used an 8 GB USB stick. When the system is installed on an external device, it is easy to use the system for its intended purpose--having a portable, anonymous and secure system. We tested this portable version of the system on several laptops with no trouble and no noticeable difference in use between the different machines. We attempted to use the system for the following use cases: anonymous web browsing, secure email, data encryption and secure data transportation. &lt;br /&gt;
&lt;br /&gt;
Apart from this, Privatix also came with OpenOffice applications for editing all types of data and much of the basic GNOME functionality, including (but not limited to):&lt;br /&gt;
* Pidgin IM and Empathy IM Client for instant messaging&lt;br /&gt;
* Evolution Mail for sending and retrieving email&lt;br /&gt;
* gedit for text editing&lt;br /&gt;
&lt;br /&gt;
====Anonymous Web Browsing====&lt;br /&gt;
[[File:TOR.png|thumb|right|TOR is enabled by default in Privatix]]&lt;br /&gt;
The main thing we liked about this system was the secure and anonymous web browsing. The default browser in the system is IceWeasel (an older version of GNU IceCat--a re-branding of FireFox compatible with both Linux and Mac systems) which comes equipped with security features not available by default in FireFox. The main add-on that we liked was that The Onion Router (TOR) is installed and enabled by default (it can be disabled if the user wishes). TOR is an open source project meant to provide anonymity online--mainly preventing anyone from learning your location or browsing habits--by routing webpage requests through virtual tunnels made up of individual TOR nodes. Since no two &amp;quot;paths&amp;quot; for a request are ever the same, it is very difficult for anyone to monitor your traffic. &amp;lt;ref name=&amp;quot;TOR Project - About&amp;quot;&amp;gt;[https://www.torproject.org/about/overview.html.en TOR Project - About](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Secure Email====&lt;br /&gt;
&lt;br /&gt;
The Privatix Live-System also came equipped with the security-conscious email client IceDove--an unbranded ThunderBird mail client (a cross-platform email client that provides government-grade security features). The email client was easily set up and used, supporting digital signing and message encryption via certificates by default (as with TOR, this could be disabled if the user wished). &amp;lt;ref name=&amp;quot;icedove&amp;quot;&amp;gt;[http://packages.debian.org/sid/icedove IceDove](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Data Encryption====&lt;br /&gt;
[[File:Encrypt.jpg|thumb|right|Software to encrypt external device]]&lt;br /&gt;
The Privatix Live-System also has the ability to encrypt external devices (besides the external device that the system is installed on). This meant that we could have an unlimited amount of encrypted data, not being limited to the size of the external device that the system itself is installed on. The ability to encrypt secondary external devices is very handy as much of the space on the external device that Privatix is installed on is taken up by the system itself, especially if one fills the device with blank decoy data on installation. The encryption software was well designed, easy to use, and usable even by absolute beginners of the system.&lt;br /&gt;
&lt;br /&gt;
====Secure Data Transportation====&lt;br /&gt;
There are two ways that Privatix fulfills its secure data transportation goal:&lt;br /&gt;
# When saving data on the external device with the Privatix Live-System, the data is automatically encrypted and is also password protected (since the portable version of Privatix requires a password to use it). &amp;lt;ref name=&amp;quot;Privatix FAQ&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/index.html.en Privatix FAQ](Last accessed 10-10-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As mentioned above, Privatix allows for the encryption of secondary external devices, hence meaning that data can be securely transported without even having the Privatix Live-System with you.&lt;br /&gt;
&lt;br /&gt;
====General Use====&lt;br /&gt;
&lt;br /&gt;
Even with the additional security features not available in other distributions, Privatix would still be a very desirable live system to use. It is portable, especially once installed on an external device, and easily used with little bloatware. The default applications such as OpenOffice for data editing, Pidgin for instant messaging, various graphics editors, a video player, and a CD burner/extractor ensured that the system was still perfectly functional for everyday use, even with security, rather than extensive functionality, being the main focus.&lt;br /&gt;
&lt;br /&gt;
===Live CD and Virtual Box===&lt;br /&gt;
&lt;br /&gt;
We found that running Privatix using the live CD and VirtualBox was equivalent. &lt;br /&gt;
&lt;br /&gt;
When booting the live CD in VirtualBox, there are certain key features of the Privatix Live-System you are missing (mainly because these features are meant for the portable version to be installed on an external device). However, just booting from the live CD still gives a lot of the functionality we would use the system for--mainly the anonymous web browsing, secure email and data encryption. The key differences were the lack of portability and the inability to save any data on the live CD or VirtualBox environment.&lt;br /&gt;
&lt;br /&gt;
When using only the live CD or VirtualBox, all files are deleted when the system is shut down. In addition, any files saved to the desktop by the user will not appear.  They will be hidden from view, but can be viewed by opening the terminal, navigating to the desktop and running the ls command.&lt;br /&gt;
&lt;br /&gt;
The main flaw in only using the system in these mediums is that the added protection of having a user account and password to access the system is not present on the live CD or when using Privatix in a virtual machine. This is due to the fact that Privatix does not implement user accounts and password protection until it has been fully installed onto an external device. As the main goal of this distribution is privacy, it is highly recommended that the user fully install the OS onto an external device for the added security of password protection.&lt;br /&gt;
&lt;br /&gt;
==Usage Evaluation==&lt;br /&gt;
&lt;br /&gt;
During our use of Privatix, we found it performed on par with its description as a secure and portable system. The tools provided to encrypt data and the secure browser with add-ons for anonymity especially supported this belief. However, we also found some parts of the distribution that were a cause for concern.  To begin with, there was a slight language barrier as the system was originally written in German. This was made apparent by the frequent grammar mistakes in both the existing English documentation and the operating system itself, indicating that English was not the primary language of the writers of this operating system. Most of the documentation for the operating system is also in German. Those who maintain Privatix and its project website are in the process of translating all their documentation so that it will be available in both English and German, though currently most of the supporting documentation and FAQ are in German. This made it hard to troubleshoot anything that went wrong with the system during installation or use. &lt;br /&gt;
&lt;br /&gt;
We also noticed that there were no wireless drivers on either portable version of the OS (installed on an external device, or simply using the live CD or booting up in a virtual machine), so wireless networks could not be connected to.  This causes a problem because an operating system on a USB stick should be completely portable, but the missing drivers require you to have a hard line to use the Internet. We also noticed that, when using Privatix in VirtualBox, wireless capability was provided by the host OS (Windows) even though there were no wireless drivers in Privatix itself.&lt;br /&gt;
&lt;br /&gt;
Lastly, when we tried to install Privatix onto a USB stick it took several attempts. We discovered that to avoid many of the problems we encountered, it is better to use a larger (preferably at least 8 GB) external device for installation and to refrain from filling the external device with blank decoy data during installation.&lt;br /&gt;
&lt;br /&gt;
However, once connected to the Internet, all software seems to work as it should. The more basic applications such as OpenOffice, the instant messaging and email clients, multimedia applications etc. function with no problems, working much as they do in any other Linux distribution. The security tools also seem to work as they should. However, since we do not know how to test the limits of its security measures, we do not know for sure how secure these programs actually are. Overall, Privatix seems to be a very functional and portable distribution, allowing users access to standard applications for tasks such as editing and transporting data, sending/receiving email, instant messaging and multimedia, with the added bonus of security and anonymity.&lt;br /&gt;
&lt;br /&gt;
=Part 2=&lt;br /&gt;
==Software Packaging==&lt;br /&gt;
[[File:dpkg_out.png|thumb|right|Package listing, dpkg]]&lt;br /&gt;
[[File:aptitude_out.png|thumb|right|Package listing, aptitude]]&lt;br /&gt;
The packaging format that was used for the Privatix-Live System was DEB (based on the Debian packaging format). &amp;lt;ref name=&amp;quot;privatix distrowatch&amp;quot;&amp;gt;[http://distrowatch.com/table.php?distribution=privatix Privatix Distrowatch Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The utilities used with this packaging format were dpkg and aptitude. Dpkg is used as the operating system&#039;s package management utility, with aptitude acting as the more user-friendly front end version. Aptitude made finding a list of installed packages quite easy. Aptitude allows you to see a full list of installed packages, with the packages being segregated into categories such as mail, web, shells and utils. As well as using aptitude, the command line can be used to access a list of installed packages. To do this, input the following in terminal and a list of all installed packages is generated. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -l&lt;br /&gt;
&lt;br /&gt;
Though knowing how to do this in the command line is useful, we found that using aptitude was generally better as the packages are segregated into categories, which made viewing the list of installed packages simpler. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To add a package within Privatix, we found the easiest way was to use one of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -i &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg --install &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands function in the same way which means they will either install a package, or upgrade already installed versions of the package. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To remove a package within Privatix, we found the easiest way was to use either of the following commands provided by dpkg:&lt;br /&gt;
&lt;br /&gt;
 $ dpkg -r &amp;lt;package name&amp;gt;&lt;br /&gt;
          or &lt;br /&gt;
 $ dpkg -P &amp;lt;package name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using &amp;quot;dpkg -r &amp;lt;package name&amp;gt;&amp;quot;, everything related to the package &#039;&#039;except&#039;&#039; the configuration files are removed. To fully remove a package, however, we used &amp;quot;dpkg -P &amp;lt;package name&amp;gt;&amp;quot; which removes the entire package, including the configuration files. &amp;lt;ref name=&amp;quot;dpkg man page&amp;quot;&amp;gt;[http://manpages.ubuntu.com/manpages/lucid/man1/dpkg.1.html Dpkg Man Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We found that the software catalog for this distribution was quite extensive, especially since this distribution is meant to be portable. Privatix includes all the standard packages included with Debian (e.g. libc), as well as several other utilities meant to increase security and privacy while using the system such as IceDove, TOR and TORButton.&lt;br /&gt;
&lt;br /&gt;
==Major Package Versions==&lt;br /&gt;
&lt;br /&gt;
For this section of the report, we needed to determine how heavily the packages included within our distribution were modified by the distribution&#039;s author. However, the distribution&#039;s author has stated that everything included is mainly based on Debian. &amp;lt;ref name=&amp;quot;privatix documentation&amp;quot;&amp;gt;[http://www.mandalka.name/privatix/doc.html Privatix Documentation (German)](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The packages within Privatix have not been modified; the distribution&#039;s author has mainly brought together several security- and privacy-conscious utilities into one distribution for portable and daily use. As such, many of the packages that come with the standard install of Privatix have been included because they were part of the standard install of Debian at the time this distribution was made. Please also note that this reference was taken from the main page of the distribution but that, to view it, you will need to translate it (we used Google Translate) as much of the documentation for this distribution is in German. &lt;br /&gt;
&lt;br /&gt;
===Linux Kernel===&lt;br /&gt;
The packages relating to the kernel of the system we found were:&lt;br /&gt;
* linux-base: version 2.6.32-31&lt;br /&gt;
* linux-image-2.6.32-5-686: version 2.6.32-31&lt;br /&gt;
* linux-image-2.6-282: version 2.6.32+39&lt;br /&gt;
&lt;br /&gt;
This version of the kernel was released in December of 2009, making it just under two years old. &amp;lt;ref name=&amp;quot;linux kernel&amp;quot;&amp;gt;[http://kernelnewbies.org/Linux_2_6_32 Linux Kernel v2.6.32 Info Page](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of the Linux kernel, 3.1.1, was released just yesterday (11/11/2011). &amp;lt;ref name=&amp;quot;current kernel&amp;quot;&amp;gt;[http://www.kernel.org/ Current Stable Linux Kernel](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This puts the version of the Linux kernel on Privatix about two years behind the current stable version. We believe that these packages were included within the distribution as they are the standard Linux kernel packages in the standard install of Debian. &lt;br /&gt;
&lt;br /&gt;
Please note that we treated these three packages as one entity, as they all pertain to the kernel. There was no upstream source (URL) included in the man pages of these packages.&lt;br /&gt;
&lt;br /&gt;
===Libc===&lt;br /&gt;
The libc packages that came with the standard install of Privatix were:&lt;br /&gt;
* libc-bin: version 2.11.2-10 &lt;br /&gt;
* libc6: version 2.11.2-10&lt;br /&gt;
&lt;br /&gt;
This version of libc was released in January 2011, making it approximately 11 months old.&amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This version is also the current stable version of libc, as listed on Debian. &amp;lt;ref name=&amp;quot;eglibc&amp;quot;&amp;gt;[http://packages.qa.debian.org/e/eglibc.html eglibc Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, a newer, unstable version (version 2.13-21) is currently undergoing testing. This package was included because it is part of the standard install of Debian and because all Linux-based systems require a version of libc.&lt;br /&gt;
&lt;br /&gt;
The upstream source (URL) of these packages was [http://www.eglibc.org eglibc].&lt;br /&gt;
&lt;br /&gt;
===Shell===&lt;br /&gt;
The version of the shell included with the standard install of Privatix was:&lt;br /&gt;
* bash: version 4.1-3&lt;br /&gt;
&lt;br /&gt;
This version of bash was released in approximately April 2010. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of bash. &amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, last month, version 4.2 of bash was pushed into testing and became the current experimental version.&amp;lt;ref name=&amp;quot;bash&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/bash.html bash Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This package was included as bash is the default command-line shell in the standard install of Debian.&lt;br /&gt;
&lt;br /&gt;
The upstream source (URL) of this package was [http://tiswww.case.edu/php/chet/bash/bashtop.html tiswww bash].&lt;br /&gt;
&lt;br /&gt;
===Utilities===&lt;br /&gt;
For this section, we chose to study the busybox package included within the standard install. The package for busybox we found was:&lt;br /&gt;
* busybox: version 1:1.17.1-8&lt;br /&gt;
&lt;br /&gt;
This version of busybox was released in approximately November 2010. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; It is also the current stable version of busybox as listed on Debian. &amp;lt;ref name=&amp;quot;busybox&amp;quot;&amp;gt;[http://packages.qa.debian.org/b/busybox.html busybox Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This package was included as it is the version of busybox included with the standard install of Debian.&lt;br /&gt;
&lt;br /&gt;
There was no upstream source (URL) included within the man page of busybox within the system.&lt;br /&gt;
&lt;br /&gt;
===Software Packaging===&lt;br /&gt;
The main utility used for package management within Privatix was dpkg. The version of dpkg included with the standard install of this distribution is: &lt;br /&gt;
* dpkg: version 1.15.8.10&lt;br /&gt;
&lt;br /&gt;
This version of dpkg was released in February of 2011, making it 10 months old. &amp;lt;ref name=&amp;quot;dpkg changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/dpkg/1.15.8.10ubuntu1 Dpkg Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The current stable version of dpkg, as listed on Debian, is version 1.15.8.11, which was released in April of 2011. &amp;lt;ref name=&amp;quot;dpkg&amp;quot;&amp;gt;[http://packages.qa.debian.org/d/dpkg.html Dpkg Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of dpkg included with Privatix three months behind the latest stable version. This package was included since dpkg is the package management system of Debian, the distribution on which Privatix is based.&lt;br /&gt;
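Comparisons like the one above can be checked mechanically. As a sketch, the older_of helper below (our own illustration) uses GNU sort -V, whose version ordering approximates dpkg's for simple version strings such as these; on a real Debian system, dpkg itself provides a --compare-versions option for the authoritative answer:

```shell
# Sketch: older_of picks the older of two Debian-style version strings
# using GNU coreutils `sort -V`. This approximates dpkg's ordering for
# simple versions; `dpkg --compare-versions` is the authoritative tool.
older_of() {
    printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1
}

older_of 1.15.8.10 1.15.8.11   # prints: 1.15.8.10
```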
&lt;br /&gt;
The upstream source (URL) for this package was [http://wiki.debian.org/Teams/Dpkg Dpkg on Debian]&lt;br /&gt;
&lt;br /&gt;
===Web Browser===&lt;br /&gt;
&#039;&#039;&#039;IceWeasel&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The web browser included with the standard install of Privatix is IceWeasel, with the version of IceWeasel being:&lt;br /&gt;
* iceweasel: 3.5.16-6&lt;br /&gt;
&lt;br /&gt;
This version of IceWeasel was released in March 2011, making it nine months old. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version is 3.5.16-11, which was released in November 2011. &amp;lt;ref name=&amp;quot;iceweasel&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/iceweasel.html IceWeasel Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of IceWeasel included with Privatix about eight months behind the latest stable release. IceWeasel was included within this distribution as it is a more security-conscious browser than mainstream browsers such as Mozilla Firefox. IceWeasel, Debian&#039;s rebranded build of Firefox (related to GNU IceCat, another Firefox rebranding), comes equipped with security features not enabled by default in Firefox.&lt;br /&gt;
&lt;br /&gt;
There was no upstream source (URL) provided in the man pages of the iceweasel package within the system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;TOR&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The standard install of Privatix also comes equipped with the program &amp;quot;The Onion Router&amp;quot; (TOR). The version of TOR included is:&lt;br /&gt;
* tor: 0.2.1.29-1&lt;br /&gt;
&lt;br /&gt;
This version of TOR was released in January 2011, making it 11 months old. &amp;lt;ref name=&amp;quot;tor changelog&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor/0.2.1.29-1 TOR Changelog](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The latest stable release of TOR is version 0.2.1.30-1 which was released in July 2011. &amp;lt;ref name=&amp;quot;tor&amp;quot;&amp;gt;[https://launchpad.net/ubuntu/+source/tor TOR on LaunchPad](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This package was included to help increase security, anonymity and privacy while web browsing which is one of the main goals of the Privatix distribution. For more information on TOR, see the Basic Operation section of the report, under Anonymous Web Browsing. &lt;br /&gt;
&lt;br /&gt;
The upstream source (URL) provided with this package was [https://www.torproject.org TOR Project].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;TOR Button&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A program included to extend the functionality of TOR is the TOR Button. The version of TOR Button that comes with the standard install of Privatix is:&lt;br /&gt;
* xul-ext-torbutton: version 1.2.5-3&lt;br /&gt;
&lt;br /&gt;
This version of TOR Button was released in October 2010, making it just over a year old.&amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; The newest stable version of this program is version 1.4.4.1, which was released last month. &amp;lt;ref name=&amp;quot;torbutton&amp;quot;&amp;gt;[https://www.torproject.org/torbutton/ TORButton](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This would put the version of TORButton included with Privatix about a year behind the latest stable release. This package was included to extend the functionality of TOR: the add-on allows the user to enable and disable TOR at the push of a button located in the corner of the browser.&lt;br /&gt;
&lt;br /&gt;
The upstream source (URL) provided with this package was [https://www.torproject.org/torbutton/ TOR Button Project].&lt;br /&gt;
&lt;br /&gt;
===Email===&lt;br /&gt;
&lt;br /&gt;
The default email client provided with the standard install of Privatix is IceDove, with the included version being:&lt;br /&gt;
* icedove: 3.0.11-1+s&lt;br /&gt;
&lt;br /&gt;
This version of IceDove is the current stable version as listed on Debian. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; However, this version was included within Privatix before it was made stable. It was released as an unstable version in December 2010 and later became the current stable version in October 2011. &amp;lt;ref name=&amp;quot;icedove debian&amp;quot;&amp;gt;[http://packages.qa.debian.org/i/icedove.html IceDove Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This email client was included because it is a more security-conscious email client than others such as the regular version of Thunderbird, providing government-grade security features. For more information on this program, refer to the Basic Operation section of this report under Secure Email.&lt;br /&gt;
&lt;br /&gt;
There was no upstream source (URL) provided with this package.&lt;br /&gt;
&lt;br /&gt;
===Other===&lt;br /&gt;
&#039;&#039;&#039;Pidgin&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The default IM client included with the standard install of Privatix was Pidgin, with the version being:&lt;br /&gt;
* pidgin: 2.7.3.1+sq&lt;br /&gt;
&lt;br /&gt;
This version of Pidgin was released in October 2010 and is also the current stable version of Pidgin as listed on Debian. &amp;lt;ref name=&amp;quot;pidgin&amp;quot;&amp;gt;[http://packages.qa.debian.org/p/pidgin.html Pidgin Source Package on Debian](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; This package was included as Pidgin is the default IM client in the standard install of Debian, the system on which Privatix is based.&lt;br /&gt;
&lt;br /&gt;
The upstream source (URL) provided with this package was [http://www.pidgin.im Pidgin].&lt;br /&gt;
&lt;br /&gt;
==Initialization==&lt;br /&gt;
[[File:init.png|thumb|right|List of Processes]]&lt;br /&gt;
Privatix generally follows the same initialization process as Debian. Privatix initializes by first executing the BIOS and then the boot loader code. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; Privatix uses the same init system as Debian, System V init. /etc/inittab is the configuration file; the /sbin/init program initializes the system following the description in this configuration file. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; inittab sets the default run level of Privatix, which is run level 2. Following this, all the scripts located in /etc/rc2.d are executed in alphabetical order. &amp;lt;ref name=&amp;quot;debian boot process&amp;quot;&amp;gt;[http://wiki.debian.org/BootProcess#System_Initialization Debian Boot Process](Last accessed 12-11-11)&amp;lt;/ref&amp;gt; These scripts are:&lt;br /&gt;
* &#039;&#039;&#039;S01polipo&#039;&#039;&#039;: polipo web cache--a small and fast caching web proxy&lt;br /&gt;
* &#039;&#039;&#039;S01rsyslog&#039;&#039;&#039;: enhanced multi-thread syslogd which is Linux system logging utility&lt;br /&gt;
* &#039;&#039;&#039;S01sudo&#039;&#039;&#039;: provides sudo&lt;br /&gt;
* &#039;&#039;&#039;S02cron&#039;&#039;&#039;: starts the scheduler of the system&lt;br /&gt;
* &#039;&#039;&#039;S02dbus&#039;&#039;&#039;: utility to send messages between processes and applications&lt;br /&gt;
* &#039;&#039;&#039;S02rsync&#039;&#039;&#039;: opens rsync--a program that allows files to be copied to and from remote machines&lt;br /&gt;
* &#039;&#039;&#039;S02tor&#039;&#039;&#039;: starts TOR (for more information, see above)&lt;br /&gt;
* &#039;&#039;&#039;S03avahi-daemon&#039;&#039;&#039;: starts the zeroconf daemon which is used for configuring the network automatically&lt;br /&gt;
* &#039;&#039;&#039;S03bluetooth&#039;&#039;&#039;: launches bluetooth&lt;br /&gt;
* &#039;&#039;&#039;S03network-manager&#039;&#039;&#039;: starts a daemon that automatically switches network connections to the best available connection&lt;br /&gt;
* &#039;&#039;&#039;S04openvpn&#039;&#039;&#039;: starts openvpn service--a generic vpn service&lt;br /&gt;
* &#039;&#039;&#039;S05gdm3&#039;&#039;&#039;: script for the GNOME display manager&lt;br /&gt;
* &#039;&#039;&#039;S06bootlogs&#039;&#039;&#039;: the log file handling to be done during bootup--mainly things that don&#039;t need to be done particularly early in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07rc.local&#039;&#039;&#039;: runs the /etc/rc.local file if it exists--by default this file does nothing and simply exits&lt;br /&gt;
* &#039;&#039;&#039;S07rmnologin&#039;&#039;&#039;: removes the /etc/nologin file as the last step in the boot process&lt;br /&gt;
* &#039;&#039;&#039;S07stop-bootlogd&#039;&#039;&#039;: stops the bootlogd logging daemon once the boot process is complete&lt;br /&gt;
&lt;br /&gt;
Following this, the system is initialized.&lt;br /&gt;
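The alphabetical execution order described above can be sketched as follows. The run_rc_scripts function is a hypothetical helper that merely mimics how init globs a runlevel directory and runs its S* scripts in order; it only prints what would run and starts nothing:

```shell
# Sketch: run_rc_scripts mimics how System V init walks a runlevel
# directory such as /etc/rc2.d, running the S* scripts in alphabetical
# (glob) order. It only prints the plan; it does not start services.
run_rc_scripts() {
    dir="$1"
    for script in "$dir"/S*; do
        [ -e "$script" ] || continue          # empty directory: glob stays literal
        echo "run: $(basename "$script") start"
    done
}

# Example (on Privatix/Debian): run_rc_scripts /etc/rc2.d
```

Because shell glob expansion is sorted, S01sudo runs before S02cron, S02cron before S03bluetooth, and so on, matching the ordering of the list above.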
&lt;br /&gt;
We found this information by first confirming that Privatix uses the same style of initialization as Debian. Once we had ascertained this, we researched the Debian boot process. Privatix follows the same steps up until the loading of the runlevel scripts, some of which differ from Debian&#039;s (e.g. TOR). Following this, we researched each of the scripts run at Privatix&#039;s default run level of 2. The scripts are listed above in the order they execute. To find the purpose of each script and which programs it opened, we manually went through each of the scripts.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gbooth</name></author>
	</entry>
</feed>