<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eapache</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eapache"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Eapache"/>
	<updated>2026-04-03T19:05:03Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_18&amp;diff=19070</id>
		<title>DistOS 2014W Lecture 18</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_18&amp;diff=19070"/>
		<updated>2014-04-24T14:41:51Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Tapestry: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Distributed Hash Tables (March 18)==&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Distributed_hash_table Wikipedia&#039;s article on Distributed Hash Tables]&lt;br /&gt;
* [http://pdos.csail.mit.edu/~strib/docs/tapestry/tapestry_jsac03.pdf Zhao et al, &amp;quot;Tapestry: A Resilient Global-Scale Overlay for Service Deployment&amp;quot; (JSAC 2003)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Distributed Hash Table Overview ==&lt;br /&gt;
&lt;br /&gt;
A Distributed Hash Table (DHT) is a fast lookup structure of &amp;lt;key,value&amp;gt; pairs&lt;br /&gt;
distributed across many nodes in a network.  Keys are hashed to generate the&lt;br /&gt;
index at which the value can be found.  Because lookup goes through a hash&lt;br /&gt;
function, typically only exact-match queries are supported.&lt;br /&gt;
&lt;br /&gt;
Usually, each node has a partial view of&lt;br /&gt;
the hash table, as opposed to a full replica, so no node knows exactly which other node is responsible for a given key.  This has given rise to a number&lt;br /&gt;
of different routing techniques:&lt;br /&gt;
* A centralized server may maintain a list of all keys and associated nodes at which the value can be found.  This method involves a single point of failure.&lt;br /&gt;
** eg. Napster&lt;br /&gt;
* Flooding: Each node may query all connected nodes.  This method has performance and scalability shortcomings but has the benefit of being decentralized.&lt;br /&gt;
** eg. Gnutella&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Consistent_hashing Consistent Hashing] The keyspace can be partitioned such that nodes will maintain the values for keys that hash to similar indices (e.g., within a certain Hamming distance). Given a query, nodes do not know specifically on which node a key is located, but they do know a few nodes (a proper subset of the network) located &amp;quot;closer&amp;quot; to the key. The query is then forwarded to the closest such node. This seems to be the most popular technique for DHTs. Its biggest benefit is that nodes can be added and removed without notifying every other node on the network.&lt;br /&gt;
** eg. Tapestry&lt;br /&gt;
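The consistent-hashing approach above can be sketched in a few lines. This is a generic hash ring, not Tapestry's actual prefix scheme; the node names and the 32-bit hash space are illustrative assumptions.

```python
# Minimal consistent-hashing ring (illustrative sketch): keys and nodes
# share one hash space, and a key is owned by the first node at or after
# the key's position, wrapping around the ring.
import bisect
import hashlib

def h(s):
    # Map a string into a 32-bit hash space.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def owner(self, key):
        # First ring point at or after the key's hash, with wraparound.
        i = bisect.bisect(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

ring = Ring(["node-a", "node-b", "node-c"])
# Removing a node only remaps the keys that node owned; all other
# key-to-node assignments stay exactly the same.
```

This locality is the property the notes mention: membership changes touch only one segment of the ring, so the rest of the network need not be notified.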
&lt;br /&gt;
==Tapestry:==&lt;br /&gt;
Tapestry is an overlay network which makes use of a DHT to provide routing for&lt;br /&gt;
distributed applications.  Similar to IP routing, not all nodes need to be &lt;br /&gt;
directly connected to each other: they can query a subset of neighbours for&lt;br /&gt;
information about which nodes are responsible for certain parts of the keyspace.&lt;br /&gt;
Routing is performed in such a way that nodes are aware of their &#039;&#039;distance&#039;&#039;&lt;br /&gt;
to the object being queried.  Hence objects can be located with low latency&lt;br /&gt;
without the need to migrate actual object data between nodes. &lt;br /&gt;
&lt;br /&gt;
Tapestry was built for OceanStore, which was designed for the open Internet: nodes are constantly added and removed, and the network topology is always changing. That&#039;s why a dynamic routing system is needed.&lt;br /&gt;
&lt;br /&gt;
In Tapestry, every object and node is identified by a UUID. The system is entirely distributed, decentralized and peer-to-peer. Each node stores a routing table of its various neighbours by the prefix of their UUID. This lets routing occur digit by digit, effectively turning lookup into the traversal of a distributed [https://en.wikipedia.org/wiki/Radix_tree Radix tree].&lt;br /&gt;
&lt;br /&gt;
* DNS is organized as a tree, whereas Tapestry is hierarchically structured by UUID prefix.&lt;br /&gt;
* How does the information flow? Each node has a neighbour table that contains its neighbours&#039; IDs.&lt;br /&gt;
** From initialization, each node has a locally optimal routing table that it maintains&lt;br /&gt;
** Routing happens digit by digit&lt;br /&gt;
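The digit-by-digit routing described above can be sketched as follows. This is an illustrative simplification of Tapestry's prefix routing, assuming hex-string node IDs and a flat neighbour list rather than the real per-level routing table.

```python
# Sketch of prefix (digit-by-digit) routing: at each hop, forward to a
# neighbour whose ID matches the target in more leading digits than the
# current node does. Node IDs here are made-up hex strings.
def shared_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(current, target, neighbours):
    # Pick the neighbour extending the matched prefix the most; stop when
    # no neighbour improves on the current node.
    best = max(neighbours, key=lambda n: shared_prefix_len(n, target))
    if shared_prefix_len(best, target) > shared_prefix_len(current, target):
        return best
    return None  # current node is (locally) the root for this ID

# Example: from "1000" toward "42AD", the neighbour "4200" matches two
# digits of the target, so it is chosen as the next hop.
```

Because each hop fixes at least one more digit, a lookup over n-digit IDs takes at most n hops, which is the radix-tree traversal the notes describe.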
&lt;br /&gt;
* Tapestry API:&lt;br /&gt;
** It has four operations: PublishObject, UnpublishObject, RouteToObject, and RouteToNode.&lt;br /&gt;
** Each node has an ID and each endpoint object has a GUID (globally unique identifier).&lt;br /&gt;
&lt;br /&gt;
* Tapestry looks somewhat like an operating system.&lt;br /&gt;
** It has two implementations: one built on the UDP protocol and the other on TCP.&lt;br /&gt;
&lt;br /&gt;
Fun fact: it is now called [http://current.cs.ucsb.edu/projects/chimera/ Chimera].&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_12&amp;diff=19069</id>
		<title>DistOS 2014W Lecture 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_12&amp;diff=19069"/>
		<updated>2014-04-24T14:34:51Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Chubby (Feb 13)=&lt;br /&gt;
[https://www.usenix.org/legacy/events/osdi06/tech/burrows.html Burrows, The Chubby Lock Service for Loosely-Coupled Distributed Systems (OSDI 2006)]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Distributed_lock_manager#Google.27s_Chubby_lock_service Chubby], developed at Google, was designed to be a coarse-grained locking service for use within loosely coupled distributed systems (i.e., a network consisting of a high number of small machines). The key contribution was the implementation of Chubby (i.e., there were no new algorithms designed/introduced).&lt;br /&gt;
&lt;br /&gt;
Its purpose is to allow clients to synchronize their activities and to agree on basic information about their environment. It is used to varying degrees by other Google projects such as GFS, MapReduce, and BigTable.&lt;br /&gt;
&lt;br /&gt;
By coarse-grained locking, we mean locking resources for extended lengths of time. For example, an elected primary might handle all access to a given piece of data for hours or days.&lt;br /&gt;
&lt;br /&gt;
It is basically an ultra-reliable, highly available file system for very small files that is used as a locking service.&lt;br /&gt;
&lt;br /&gt;
Anil: &amp;quot;Once implemented, Chubby abstracts away all the crazy complicated stuff so you can more easily build your distributed system&amp;quot;. Chubby is a tool that gives Google devs important guarantees to build on.&lt;br /&gt;
&lt;br /&gt;
The open-source equivalent is [https://zookeeper.apache.org/ Apache ZooKeeper].&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
The funny thing is that Chubby is essentially a filesystem (with files, file permissions, reading/writing, a hierarchical structure, etc.) with a few caveats. Mainly that any file can act as a reader/writer lock and that only whole-file operations are performed (i.e., the whole file is written or read), as the files are quite small (256K max). The main reason for implementing the distributed lock service as a file system, rather than as, say, a library, was to make the system easier to use.&lt;br /&gt;
&lt;br /&gt;
All the locks are fully advisory, meaning others can &amp;quot;go around&amp;quot; whoever has the lock to access the resource (for reading and, sometimes, writing), as opposed to mandatory locks, which give completely exclusive access to a resource. Chubby uses advisory locks because, if a client holding a lock runs into a problem, there should be a way to release the lock gracefully rather than requiring the entire system to be brought down or rebooted.&lt;br /&gt;
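POSIX flock() gives the same advisory semantics on a single machine, so it makes a convenient sketch of the idea; the file name and contents below are made up for illustration.

```python
# Advisory locking in the POSIX style (similar in spirit to Chubby's
# locks): flock() only coordinates processes that choose to use it; a
# process that never calls flock() can still read or write the file.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lease")  # illustrative file
with open(path, "w") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # acquire exclusive advisory lock
    f.write("primary: node-17")     # do work while holding the lock
    fcntl.flock(f, fcntl.LOCK_UN)   # release it explicitly

# A non-cooperating reader is not blocked by the advisory lock:
with open(path) as g:
    data = g.read()
```

The second open succeeds without ever touching the lock, which is exactly the "go around" behaviour described above.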
&lt;br /&gt;
It can be noted that Linux also uses advisory locks, as opposed to Windows, which uses mandatory locks. This could be a shortcoming of Windows: when anything about the system changes, it must be completely rebooted because the locks on files are never broken. With advisory locks, as in Linux, the system need only be rebooted when the kernel is modified/updated.&lt;br /&gt;
&lt;br /&gt;
Chubby also functions as a name server, but only really for functional names/roles, such as for the mail server or a GFS server (i.e., Chubby is mainly used as a name server for logical/symbolic names for roles). It is a centralized place that maps names to resources, with a unified interface for doing so. The name-value mappings in Chubby allow for a consistent, real-time, overall view of the entire system.&lt;br /&gt;
&lt;br /&gt;
As a name server, Chubby provides guarantees not given with DNS (e.g., DNS is subject to a stale cache) as Chubby provides a unified view of the way things are in the system. &lt;br /&gt;
&lt;br /&gt;
Chubby was made coarse-grained for scalability: coarse-grained locks give the ability to create a distributed system, while fine-grained locks wouldn&#039;t scale well. It can also be noted that fine-grained locks could be implemented on top of the coarse-grained ones. The entire point of Chubby was to give ultra-high availability and integrity.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
* Uses [http://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos], which is an insanely complicated way of solving the distributed consensus problem.&lt;br /&gt;
** Given many proposed values, it chooses one to be agreed upon.&lt;br /&gt;
** More recently, a new consensus algorithm, [https://raftconsensus.github.io/ Raft], has been developed that is much simpler to understand, if you&#039;re interested in that sort of thing. Chubby does not use Raft, but in theory it could be swapped in place of Paxos.&lt;br /&gt;
&lt;br /&gt;
* Master Chubby server with 4 slaves (5 servers total make up a Chubby cell)&lt;br /&gt;
** Master and slaves all have all the data.&lt;br /&gt;
** Nothing particularly special about the master&lt;br /&gt;
** If the master fails, one slave is elected as the new master&lt;br /&gt;
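A toy calculation shows why a five-server cell keeps working through failures: any two majorities of five servers must overlap, so a value accepted by one majority cannot be silently contradicted by another. (This is the quorum-intersection property consensus protocols like Paxos rely on; the check below is an illustration, not Chubby's code.)

```python
# Every pair of 3-of-5 majorities shares at least one server, so a
# 5-server cell tolerates 2 failures without losing agreed-upon state.
import itertools

SERVERS = range(5)
majorities = [set(c) for c in itertools.combinations(SERVERS, 3)]
# Non-empty intersection for every pair of majorities:
overlap = all(a.intersection(b)
              for a, b in itertools.product(majorities, repeat=2))
```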
&lt;br /&gt;
==Use cases==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussion==&lt;br /&gt;
&lt;br /&gt;
Where else do we see things such as Chubby? Where would you want this consistent, overall view?&lt;br /&gt;
&lt;br /&gt;
You would want this consistent view in any sort of synchronized set of files across a set of systems, such as Dropbox. The main tenets of Chubby&#039;s design would hold wherever you want to make sure there is an online consensus. It should be noted that this is not like version control: with version control, everyone has their own copy, and the copies are all merged later. In this type of system, however, there is only one version available throughout the distributed system. Chubby&#039;s design would differ from Dropbox in that Dropbox is designed so that you can work offline and then synchronize your changes once you are online again (i.e., there can sometimes be more than one version of a file, meaning you lack the consistent, overall view given by Chubby).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Anil&#039;s opinion, we can think about Chubby as an example of bootstrapping: build one good thing to meet your needs, rather than adding mechanisms to existing systems. Consistency is nice to have in the world of distributed systems, but it comes with a cost; whether you are willing to pay that cost is a question distributed-system designers and users often encounter. In Anil&#039;s view, Chubby brings this cost down a bit. Anil mentioned that one of the main ideas of the Distributed Operating Systems course is to understand why you need different algorithms/mechanisms to build a distributed system, rather than looking at the internals of each algorithm in depth.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_12&amp;diff=19067</id>
		<title>DistOS 2014W Lecture 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_12&amp;diff=19067"/>
		<updated>2014-04-24T14:33:40Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Chubby (Feb 13)=&lt;br /&gt;
[https://www.usenix.org/legacy/events/osdi06/tech/burrows.html Burrows, The Chubby Lock Service for Loosely-Coupled Distributed Systems (OSDI 2006)]&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Distributed_lock_manager#Google.27s_Chubby_lock_service Chubby], developed at Google, was designed to be a coarse-grained locking service for use within loosely coupled distributed systems (i.e., a network consisting of a high number of small machines). The key contribution was the implementation of Chubby (i.e., there were no new algorithms designed/introduced).&lt;br /&gt;
&lt;br /&gt;
Its purpose is to allow clients to synchronize their activities and to agree on basic information about their environment. It is used to varying degrees by other Google projects such as GFS, MapReduce, and BigTable.&lt;br /&gt;
&lt;br /&gt;
By coarse-grained locking, we mean locking resources for extended lengths of time. For example, an elected primary might handle all access to a given piece of data for hours or days.&lt;br /&gt;
&lt;br /&gt;
It is basically an ultra-reliable, highly available file system for very small files that is used as a locking service.&lt;br /&gt;
&lt;br /&gt;
Anil: &amp;quot;Once implemented, Chubby abstracts away all the crazy complicated stuff so you can more easily build your distributed system&amp;quot;. Chubby is a tool that gives Google devs important guarantees to build on.&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
The funny thing is that Chubby is essentially a filesystem (with files, file permissions, reading/writing, a hierarchical structure, etc.) with a few caveats. Mainly that any file can act as a reader/writer lock and that only whole-file operations are performed (i.e., the whole file is written or read), as the files are quite small (256K max). The main reason for implementing the distributed lock service as a file system, rather than as, say, a library, was to make the system easier to use.&lt;br /&gt;
&lt;br /&gt;
All the locks are fully advisory, meaning others can &amp;quot;go around&amp;quot; whoever has the lock to access the resource (for reading and, sometimes, writing), as opposed to mandatory locks, which give completely exclusive access to a resource. Chubby uses advisory locks because, if a client holding a lock runs into a problem, there should be a way to release the lock gracefully rather than requiring the entire system to be brought down or rebooted.&lt;br /&gt;
&lt;br /&gt;
It can be noted that Linux also uses advisory locks, as opposed to Windows, which uses mandatory locks. This could be a shortcoming of Windows: when anything about the system changes, it must be completely rebooted because the locks on files are never broken. With advisory locks, as in Linux, the system need only be rebooted when the kernel is modified/updated.&lt;br /&gt;
&lt;br /&gt;
Chubby also functions as a name server, but only really for functional names/roles, such as for the mail server or a GFS server (i.e., Chubby is mainly used as a name server for logical/symbolic names for roles). It is a centralized place that maps names to resources, with a unified interface for doing so. The name-value mappings in Chubby allow for a consistent, real-time, overall view of the entire system.&lt;br /&gt;
&lt;br /&gt;
As a name server, Chubby provides guarantees not given with DNS (e.g., DNS is subject to a stale cache) as Chubby provides a unified view of the way things are in the system. &lt;br /&gt;
&lt;br /&gt;
Chubby was made coarse-grained for scalability: coarse-grained locks give the ability to create a distributed system, while fine-grained locks wouldn&#039;t scale well. It can also be noted that fine-grained locks could be implemented on top of the coarse-grained ones. The entire point of Chubby was to give ultra-high availability and integrity.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
* Uses [http://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos], which is an insanely complicated way of solving the distributed consensus problem.&lt;br /&gt;
** Given many proposed values, it chooses one to be agreed upon.&lt;br /&gt;
** More recently, a new consensus algorithm, [https://raftconsensus.github.io/ Raft], has been developed that is much simpler to understand, if you&#039;re interested in that sort of thing. Chubby does not use Raft, but in theory it could be swapped in place of Paxos.&lt;br /&gt;
&lt;br /&gt;
* Master Chubby server with 4 slaves (5 servers total make up a Chubby cell)&lt;br /&gt;
** Master and slaves all have all the data.&lt;br /&gt;
** Nothing particularly special about the master&lt;br /&gt;
** If the master fails, one slave is elected as the new master&lt;br /&gt;
&lt;br /&gt;
==Use cases==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Discussion==&lt;br /&gt;
&lt;br /&gt;
Where else do we see things such as Chubby? Where would you want this consistent, overall view?&lt;br /&gt;
&lt;br /&gt;
You would want this consistent view in any sort of synchronized set of files across a set of systems, such as Dropbox. The main tenets of Chubby&#039;s design would hold wherever you want to make sure there is an online consensus. It should be noted that this is not like version control: with version control, everyone has their own copy, and the copies are all merged later. In this type of system, however, there is only one version available throughout the distributed system. Chubby&#039;s design would differ from Dropbox in that Dropbox is designed so that you can work offline and then synchronize your changes once you are online again (i.e., there can sometimes be more than one version of a file, meaning you lack the consistent, overall view given by Chubby).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Anil&#039;s opinion, we can think about Chubby as an example of bootstrapping: build one good thing to meet your needs, rather than adding mechanisms to existing systems. Consistency is nice to have in the world of distributed systems, but it comes with a cost; whether you are willing to pay that cost is a question distributed-system designers and users often encounter. In Anil&#039;s view, Chubby brings this cost down a bit. Anil mentioned that one of the main ideas of the Distributed Operating Systems course is to understand why you need different algorithms/mechanisms to build a distributed system, rather than looking at the internals of each algorithm in depth.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_8&amp;diff=19050</id>
		<title>DistOS 2014W Lecture 8</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_8&amp;diff=19050"/>
		<updated>2014-04-23T17:33:56Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Class Discussion: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==NFS and AFS (Jan 30)==&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/sandberg-nfs.pdf Russel Sandberg et al., &amp;quot;Design and Implementation of the Sun Network Filesystem&amp;quot; (1985)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/howard-afs.pdf John H. Howard et al., &amp;quot;Scale and Performance in a Distributed File System&amp;quot; (1988)]&lt;br /&gt;
&lt;br /&gt;
==NFS==&lt;br /&gt;
Group 1:&lt;br /&gt;
&lt;br /&gt;
1) per-operation traffic.&lt;br /&gt;
&lt;br /&gt;
2) RPC-based. Easy to program with, but a very [http://www.joelonsoftware.com/articles/LeakyAbstractions.html leaky abstraction].&lt;br /&gt;
&lt;br /&gt;
3) unreliable&lt;br /&gt;
&lt;br /&gt;
Group 2:&lt;br /&gt;
&lt;br /&gt;
1) designed to share disks over a network, not files&lt;br /&gt;
&lt;br /&gt;
2) more UNIX-like. They tried to maintain UNIX file semantics on both the client and server side.&lt;br /&gt;
&lt;br /&gt;
3) portable. It was meant to work (as a server) across many FS types.&lt;br /&gt;
&lt;br /&gt;
4) used UDP: if request dropped, just request again.&lt;br /&gt;
&lt;br /&gt;
5) it does not minimize network traffic.&lt;br /&gt;
&lt;br /&gt;
6) used VNODE, VFS as transparent interfaces to local disks.&lt;br /&gt;
&lt;br /&gt;
7) did not require much special hardware&lt;br /&gt;
&lt;br /&gt;
8) later versions took on features of AFS&lt;br /&gt;
&lt;br /&gt;
9) the stateless protocol conflicts with files being stateful by nature.&lt;br /&gt;
&lt;br /&gt;
Group 3:&lt;br /&gt;
&lt;br /&gt;
1) cache assumption invalid.&lt;br /&gt;
&lt;br /&gt;
2) no dedicated locking mechanism. They couldn&#039;t decide on which locking strategy to use, so they left it up to the users of NFS to use their own separate locking service.&lt;br /&gt;
&lt;br /&gt;
3) bad security&lt;br /&gt;
&lt;br /&gt;
Other:&lt;br /&gt;
* Client mounts full FS. No common namespace.&lt;br /&gt;
* Hostname lookup and address binding happens at mount&lt;br /&gt;
&lt;br /&gt;
==AFS==&lt;br /&gt;
&lt;br /&gt;
Group 1&lt;br /&gt;
&lt;br /&gt;
1) designed for 5000 to 10000 clients&lt;br /&gt;
&lt;br /&gt;
2) high integrity.&lt;br /&gt;
&lt;br /&gt;
Group 2&lt;br /&gt;
&lt;br /&gt;
1) designed to share files over a network, not disks. It is one FS.&lt;br /&gt;
&lt;br /&gt;
2) better scalability&lt;br /&gt;
&lt;br /&gt;
3) better security (Kerberos).&lt;br /&gt;
&lt;br /&gt;
4) minimize network traffic.&lt;br /&gt;
&lt;br /&gt;
5) less UNIX like&lt;br /&gt;
&lt;br /&gt;
6) plugin authentication&lt;br /&gt;
&lt;br /&gt;
7) needs more kernel storage due to complex commands&lt;br /&gt;
&lt;br /&gt;
8) inode concept replaced with fid&lt;br /&gt;
&lt;br /&gt;
Group 3&lt;br /&gt;
&lt;br /&gt;
1) cache assumption valid&lt;br /&gt;
&lt;br /&gt;
2) locking&lt;br /&gt;
&lt;br /&gt;
3) good security.&lt;br /&gt;
&lt;br /&gt;
Other:&lt;br /&gt;
* Caches full files locally on open. Sends diffs on close.&lt;br /&gt;
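That open/close behaviour can be sketched with a toy in-memory model (illustrative only: real AFS adds callbacks so servers can invalidate stale client caches, and the class and method names here are invented):

```python
# Toy sketch of AFS-style whole-file caching: fetch the file on open,
# operate on the local copy, and store it back on close.
class Server:
    def __init__(self):
        self.files = {}

class CachedFile:
    def __init__(self, server, name):
        self.server, self.name = server, name
        self.data = server.files.get(name, "")    # whole file fetched on open

    def write(self, data):
        self.data = data                          # modifies the local copy only

    def close(self):
        self.server.files[self.name] = self.data  # stored back on close

srv = Server()
f = CachedFile(srv, "notes.txt")
f.write("hello")   # the server does not see this write yet
f.close()          # now the server's copy is updated
```

This is why open and close are the critical operations in AFS: between them, all traffic is local, which is how AFS minimizes network chatter.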
&lt;br /&gt;
==Class Discussion:== &lt;br /&gt;
&lt;br /&gt;
NFS and AFS took substantially different approaches to the many problems they faced; while we consider AFS to have made generally better choices in this respect, it was not widely adopted because it was complex and difficult to set up, administer, and maintain. NFS, however, was comparatively simple. Its protocol and API were relatively stateless (thus it used UDP) and it shared information at the file level rather than the block level. It was also built on RPC, which was convenient to program in but was (as we have already discussed) a bad abstraction, since it hid the inherent flakiness of the network. This use of RPC led to security and reliability problems with NFS.&lt;br /&gt;
&lt;br /&gt;
AFS took a more thorough approach to figuring out coherent consistency guarantees and how to implement them efficiently. The AFS designers considered the network a bottleneck and tried to reduce the amount of chatter over it by making heavy use of caching. The &#039;open&#039; and &#039;close&#039; operations in AFS were critical, assuming importance similar in proportion to &#039;commit&#039; operations in a well-designed database system. The security model of AFS was also interesting in that, rather than going for the UNIX access-list-based implementation, AFS used a single-sign-on system based on Kerberos.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19049</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=19049"/>
		<updated>2014-04-23T17:23:42Z</updated>

		<summary type="html">&lt;p&gt;Eapache: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Early Web (Jan. 23)==&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
# How do you think the web would have been if not like the present way? &lt;br /&gt;
# What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
: Relatively satisfied with the present structure of the web; some suggested changes are in the areas below:&lt;br /&gt;
* Make use of the greater potential of Protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Implementation changes in the present payment systems, for example the use of &amp;quot;Micro-computation&amp;quot; (a discussion we will get back to in future classes), and cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
==== Problem of unstructured information ====&lt;br /&gt;
A large portion of the web serves content that is overwhelmingly concerned with presentation rather than with structuring content. Tim Berners-Lee himself bemoaned the death of the semantic web. His original vision of it was as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Code from Wikipedia&#039;s article on the semantic web, except for the block quoting form, which this MediaWiki instance doesn&#039;t seem to support. --&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web&amp;amp;nbsp;– the content, links, and transactions between people and computers. A &amp;quot;Semantic Web&amp;quot;, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The &amp;quot;intelligent agents&amp;quot; people have touted for ages will finally materialize.&amp;lt;ref&amp;gt;{{cite book |last=Berners-Lee |first=Tim |authorlink=Tim Berners-Lee |coauthors=Fischetti, Mark |title=Weaving the Web |publisher=HarperSanFrancisco |year=1999 |pages=chapter 12 |isbn=978-0-06-251587-2 |nopp=true }}&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this vision to be true, information arguably needs to be structured, maybe even classified. The idea of a universal information classification system has been floated. The modern web is mostly developed by software developers and similar, not librarians and the like.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- TODO: Yahoo blurb. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, how does one differentiate satire from fact?&lt;br /&gt;
&lt;br /&gt;
==== Valuation and deduplication of information ====&lt;br /&gt;
Another problem common with the current web is the duplication of information. Redundancy to increase the availability of information is not in itself harmful, but is the ad-hoc duplication of the information itself?&lt;br /&gt;
&lt;br /&gt;
One then comes to the problem of assigning a value to the information found therein. How does one rate information, and according to what criteria? How does one authenticate the information? Often, popularity is used as an indicator of veracity, almost in a sophistic manner. See excessive reliance on Google page ranking or Reddit score for various types of information consumption for research or news consumption respectively.&lt;br /&gt;
&lt;br /&gt;
=== On the current infrastructure ===&lt;br /&gt;
The current &amp;lt;em&amp;gt;internet&amp;lt;/em&amp;gt; infrastructure should remain as is, at least in countries with more than a modicum of freedom of access to information. Centralization of control over access to information is a terrible power; see China and parts of the Middle East. On that note, what can be said of popular sites, such as Google or Wikipedia, that serve as the main entry point for many access patterns?&lt;br /&gt;
&lt;br /&gt;
The problem, if any, in the current web infrastructure is of the web itself, not the internet.&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-peer to distributed mechanisms for documenting&lt;br /&gt;
** Reverse links with caching (distributed cache) that doesn&#039;t compromise anything&lt;br /&gt;
** More availability for user - what happens when system fails? &lt;br /&gt;
** Key management to be considered - Is it good to have centralized or distributed mechanism?&lt;br /&gt;
** Make information centric as opposed to host centric&lt;br /&gt;
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* An idea of web searching for us &lt;br /&gt;
* A suggestion of a different web if it would have been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - A notion already being implemented by Google slowly.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* HTML equivalent is inspired by the AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem : &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;br /&gt;
&lt;br /&gt;
== Group design exercise — The web that could be ==&lt;br /&gt;
&lt;br /&gt;
The web as it currently stands (and to the distress of librarians everywhere) is a vast soup of unstructured information. Many ideas for &amp;quot;the web that could be&amp;quot; focus on a web for structured information, but this hits several difficulties:&lt;br /&gt;
&lt;br /&gt;
* Nobody has ever been able to agree on a universal classification system for all potential information.&lt;br /&gt;
* The training overhead of classifiers (e.g., librarians) is high. See the master&#039;s degree that a librarian would need.&lt;br /&gt;
&lt;br /&gt;
Our current solution is a combination of various technologies including search engines and brute-force indexing, natural language processing, and tagging via the &amp;quot;semantic web&amp;quot;. Unfortunately this system has its own problems:&lt;br /&gt;
&lt;br /&gt;
* The semantic web has effectively died because nobody bothered tagging anything.&lt;br /&gt;
* Information gets duplicated when it is redistributed across the web. However, we do want some redundancy.&lt;br /&gt;
* Too much of it was developed by software developers&lt;br /&gt;
* Too reliant on Google for web structure&lt;br /&gt;
** See search-engine optimization&lt;br /&gt;
* Problem of authentication (of the information, not the presenter)&lt;br /&gt;
** Too dependent at times on the popularity of a site, almost in a sophistic manner.&lt;br /&gt;
** See Reddit&lt;br /&gt;
* How do you programmatically distinguish satire from fact?&lt;br /&gt;
&lt;br /&gt;
Other problems discussed include:&lt;br /&gt;
&lt;br /&gt;
* Information doesn&#039;t all have the same persistence; see bit rot and Vint Cerf&#039;s talk.&lt;br /&gt;
* The web is now too concerned with presentation.&lt;br /&gt;
* The web&#039;s structure is shaped by inbound links, but a bit more structure would be nice.&lt;br /&gt;
* Infrastructure doesn&#039;t need to change per se.&lt;br /&gt;
** The distributed architecture should stay. Centralized control over what information is allowed, and who may access it, is a terrible power. See China and the Middle East.&lt;br /&gt;
** Information, for the most part, exists centrally (per page), though communities (to use a generic term) are distributed.&lt;br /&gt;
* Need more sophisticated natural language processing.&lt;br /&gt;
&lt;br /&gt;
== Class discussion ==&lt;br /&gt;
&lt;br /&gt;
Focusing on vision, not the mechanism.&lt;br /&gt;
&lt;br /&gt;
* Reverse linking&lt;br /&gt;
* Distributed content distribution (glorified cache)&lt;br /&gt;
** Both for privacy and redundancy reasons&lt;br /&gt;
** Centralized content certification was suggested, but it doesn&#039;t address the problems of root of trust and distributed consistency checking.&lt;br /&gt;
*** Distributed key management is a holy grail&lt;br /&gt;
*** What about detecting large-scale subversion attempts, like in China?&lt;br /&gt;
* What is the new revenue model?&lt;br /&gt;
** What was TBL&#039;s revenue model (tongue-in-cheek, none)?&lt;br /&gt;
** Organisations like Google monetized the internet, and this mechanism could destroy their ability to do so.&lt;br /&gt;
* Search work is semi-distributed. It was suggested to let the web do the work for you.&lt;br /&gt;
* Trying to structure content in a manner simultaneously palatable to both humans and machines.&lt;br /&gt;
* Using spare CPU time on servers for natural language processing (or other AI) of cached or locally available resources.&lt;br /&gt;
* Imagine a smushed Wolfram Alpha, Google, Wikipedia, and Watson, and then distributed over the net.&lt;br /&gt;
* The document was TBL&#039;s idea of the atom of content, whereas nowadays we really need something more granular.&lt;br /&gt;
* We want to extract higher-level semantics.&lt;br /&gt;
* Google may not be pure keyword search anymore. It essentially now uses AI to determine relevancy, but we still struggle with expressing what we want to Google.&lt;br /&gt;
* What about the adversarial aspect of content hosts, vying for attention?&lt;br /&gt;
* People do actively try to fool you.&lt;br /&gt;
* Compare to Google News, though that is very specific to that domain. Their vision is a semantic web, but they are incrementally building it.&lt;br /&gt;
* In a scary fashion, Google is one of the central points of failure of the web. Even scarier is less technically competent people who depend on Facebook for that.&lt;br /&gt;
* There is a semantic gap between how we express and query information, and how AI understands it.&lt;br /&gt;
* Can think of Facebook as a distributed human search infrastructure.&lt;br /&gt;
* The core service/function of an operating system is to locate information. &#039;&#039;&#039;Search is infrastructure.&#039;&#039;&#039;&lt;br /&gt;
* The problem is not purely technical. There are political and social aspects.&lt;br /&gt;
** Searching for a file on a local filesystem should have an unambiguous answer.&lt;br /&gt;
** Asking the web is a different thing. “What is the best chocolate bar?”&lt;br /&gt;
* Is the web a network database (in the COMP 3005 sense), which we consider harmful?&lt;br /&gt;
* For two-way links, there is the problem of restructuring data and all the dependencies.&lt;br /&gt;
* Privacy issues when tracing paths across the web.&lt;br /&gt;
* What about the problem of information revocation?&lt;br /&gt;
* Need more augmented reality and distributed and micro payment systems.&lt;br /&gt;
* We need distributed, mutually untrusting social networks.&lt;br /&gt;
** Now we have the problem of storage and computation, but we also take away some of the monetizable aspects.&lt;br /&gt;
* Distribution is not free. It is very expensive in very funny ways.&lt;br /&gt;
* The dream of harvesting all the computational power of the internet is not new.&lt;br /&gt;
** Startups have come and gone many times over that problem.&lt;br /&gt;
* Google&#039;s indexers understand many documents on the web quite well. However, Google only &#039;&#039;&#039;presents&#039;&#039;&#039; a primitive keyword-like interface. It doesn&#039;t expose the ontology.&lt;br /&gt;
* Organising information does not necessarily mean applying an ontology to it.&lt;br /&gt;
* The organisational methods we now use don&#039;t use ontologies, but rather are supplemented by them.&lt;br /&gt;
&lt;br /&gt;
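The &amp;quot;glorified cache&amp;quot; and certification points above can be sketched as content-addressed storage: the key for a document is the hash of its bytes, so a copy served by any untrusted cache node can be verified by the reader. This is purely an illustrative sketch (the class and names are hypothetical, not from any real system discussed in class).&lt;br /&gt;

```python
# Purely illustrative sketch: a content-addressed cache. A document's
# key is the SHA-256 of its bytes, so a cached copy served by any
# untrusted node can be verified simply by re-hashing it.
import hashlib

class ContentCache:
    def __init__(self):
        self._store = {}  # hex digest -> content bytes

    def put(self, content):
        """Store content under its self-certifying key and return the key."""
        key = hashlib.sha256(content).hexdigest()
        self._store[key] = content
        return key

    def get(self, key):
        """Fetch content, verifying it still matches its key."""
        content = self._store[key]
        if hashlib.sha256(content).hexdigest() != key:
            raise ValueError("cached copy does not match its key")
        return content

cache = ContentCache()
key = cache.put(b"some document")
assert cache.get(key) == b"some document"
```

Note this only removes the need to trust the node serving the bytes; the root-of-trust problem (how the key itself was obtained) remains open, as noted above.&lt;br /&gt;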
A couple of related points Anil mentioned during the discussion:&lt;br /&gt;
Distributed key management is a holy grail; no one has ever managed to get it working. Nowadays, databases have become important building blocks of distributed operating systems. Anil stressed that databases can in fact be considered an OS service these days. The question “How do you navigate the complex information space?” has remained a prominent one that the web has always faced.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19048</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=19048"/>
		<updated>2014-04-23T17:13:57Z</updated>

		<summary type="html">&lt;p&gt;Eapache: misc cleanup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=The Mother of all Demos (Jan. 21)=&lt;br /&gt;
&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries/researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where do those ideas stand today? Notably, features that were easy to implement using simple mechanisms were carried forward, whereas those that demanded more complex systems, or that did not seem to add much value in the near future, were set aside. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems - a good example is the web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto; security was never considered essential in those systems, as they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* The chorded keyboard is unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug died July 2, 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time,&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided his team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few extra things on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means have&lt;br /&gt;
* Fundamentally a resource-sharing system; everything was shared, unlike later systems where you would need to do so explicitly. The resources it fundamentally makes sense to share were shared: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“The Mother of All Demos” is the nickname for Engelbart&#039;s 1968 demo, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
* More interesting in this work is that his idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers. This idea is embodied in NLS, the “oN-Line System”.&lt;br /&gt;
&lt;br /&gt;
* Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs for drawing&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS; more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alto vs NLS =&lt;br /&gt;
NLS and Alto both had text processing, drawing, programming environments, some form of email (communication). Alto had WYSIWYG everything.&lt;br /&gt;
&lt;br /&gt;
The Alto was not built on a mainframe. NLS &#039;resource sharing&#039; was based around the mainframe. The Alto had the idea of sharing via the network (ie. a printer server).&lt;br /&gt;
&lt;br /&gt;
Alto focused a lot less on &#039;hypertext&#039;. Less about navigating deep information.  It used the paper metaphor. It implemented existing metaphors and adapted them to the PC. Alto people came from a culture that really valued printed paper.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=19035</id>
		<title>DistOS 2014W Lecture 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=19035"/>
		<updated>2014-04-20T17:59:53Z</updated>

		<summary type="html">&lt;p&gt;Eapache: cleanup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is an Operating System? ==&lt;br /&gt;
&lt;br /&gt;
In general, an OS allows you to run the same applications on (slightly) different hardware. Here are a few thoughts on what the responsibilities and functionality are of modern operating systems, and what we expect from something calling itself an OS:&lt;br /&gt;
* A hardware abstraction layer such that diverse hardware resources can be accessed uniformly by software&lt;br /&gt;
* A consistent execution environment, which hardware doesn&#039;t provide (ie. code written to an interface; think portable code)&lt;br /&gt;
* Management of I/O (such as user I/O, machine I/O i.e. network I/O, sensors, videos, etc.)&lt;br /&gt;
* Resource management through multiplexing and policy use&lt;br /&gt;
** Multiplexing (sharing): one resource wanted by multiple users&lt;br /&gt;
* Communication infrastructure (for example inter-process communication) between the users (process, applications) of the operating system.&lt;br /&gt;
* Management of synchronization and concurrency issues&lt;br /&gt;
&lt;br /&gt;
We could also say that an OS turns the computer you have into the computer you want. An OS can be defined by the role it plays in the programming of systems. It takes care of resource management and creates abstraction. An OS turns hardware into the computer/API/interface we &#039;&#039;&#039;want&#039;&#039;&#039; to program.&lt;br /&gt;
&lt;br /&gt;
This is similar to how the browser is becoming the OS of the web. The browser is&lt;br /&gt;
the key abstraction needed to run web apps. It is the interface web developers target.&lt;br /&gt;
It doesn&#039;t matter what you consume a given website on (eg. a phone, tablet,&lt;br /&gt;
etc.), the browser abstracts the device&#039;s hardware and OS away.&lt;br /&gt;
&lt;br /&gt;
== What is a distributed OS? ==&lt;br /&gt;
&lt;br /&gt;
Anil prefers to think of this &#039;logically&#039; rather than functionally/physically.  This is&lt;br /&gt;
because the old distributed operating system (DOS) model applies to systems today which we&lt;br /&gt;
don&#039;t consider distributed (ie. managing multiple cores, etc). The traditional definition is systems that&lt;br /&gt;
manage their resources over a network.&lt;br /&gt;
&lt;br /&gt;
A lot of these definitions are hard to peg down because simplicity always gets in&lt;br /&gt;
the way of truth. These concepts do not fit into well-defined classes.&lt;br /&gt;
&lt;br /&gt;
To draw parallels to our previous definition of operating systems, a distributed OS takes the distributed pieces of a system and turns them into the system you want.&lt;br /&gt;
&lt;br /&gt;
It is good to think about DOSes within the context of who/what is in&lt;br /&gt;
control, in terms of who makes and enforces decisions. In essence, who is in charge? The traditional kernel-process model is a dictatorship, an authoritarian&lt;br /&gt;
model of control: the kernel controls what lives or dies. The internet, in contrast, is decentralised (eg. DNS, to some extent, ignoring the centralized roots). Distributed systems may have distributed&lt;br /&gt;
policies where there is no single source of power. Even in DOSes we see instances of authoritarian/centralized approaches, one example being the walled-garden model employed by Apple&#039;s iOS. Anil&#039;s observation is that centralized systems have an inherent fragility built in, and these kinds of systems come into existence and disappear after a while - examples include AOL and Myspace. Even Facebook looks to be a possible candidate for a similar fate. Also, concentrations of policy will tend to fall apart in the future.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18798</id>
		<title>DistOS 2014W Lecture 16</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_16&amp;diff=18798"/>
		<updated>2014-03-11T14:25:23Z</updated>

		<summary type="html">&lt;p&gt;Eapache: Created page with &amp;quot;Public Resource Computing&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Public Resource Computing&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_15&amp;diff=18733</id>
		<title>DistOS 2014W Lecture 15</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_15&amp;diff=18733"/>
		<updated>2014-03-06T15:57:02Z</updated>

		<summary type="html">&lt;p&gt;Eapache: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Can we do any kind of distributed system without crypto? We can&#039;t trust crypto...&lt;br /&gt;
&lt;br /&gt;
Perhaps probabilistically...&lt;br /&gt;
&lt;br /&gt;
Want to be able to put data in, have it distributed, and be able to get it out on some other machine.&lt;br /&gt;
&lt;br /&gt;
Availability: &amp;quot;distribute the crap out of it&amp;quot;, doesn&#039;t need crypto.&lt;br /&gt;
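A minimal sketch of this availability-by-replication idea (purely illustrative; the node layout and function names are hypothetical): store each value on k randomly chosen nodes, and a fetch succeeds as long as any replica&#039;s node is still reachable. No crypto is involved anywhere.&lt;br /&gt;

```python
# Illustrative only: availability without crypto, by replicating a value
# onto k of n nodes. A fetch succeeds while any replica's node survives.
import random

def store(nodes, k, key, value):
    """Place value on k randomly chosen nodes."""
    for name in random.sample(list(nodes), k):
        nodes[name][key] = value

def fetch(nodes, key, alive):
    """Return the value from any surviving replica, else raise KeyError."""
    for name in alive:
        if key in nodes[name]:
            return nodes[name][key]
    raise KeyError(key)

# Ten empty nodes; replicate one document onto four of them.
nodes = {"node%d" % i: {} for i in range(10)}
store(nodes, 4, "doc", b"payload")
assert fetch(nodes, "doc", list(nodes)) == b"payload"
```

Availability here is only probabilistic: a fetch can still fail if every node holding a replica happens to be down, which is why &amp;quot;distribute the crap out of it&amp;quot; is the operative advice.&lt;br /&gt;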
&lt;br /&gt;
Integrity: hashing, but we assume hashes can be forged. If we want to know that we got the same file, then simply send each other the file and compare.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_15&amp;diff=18732</id>
		<title>DistOS 2014W Lecture 15</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_15&amp;diff=18732"/>
		<updated>2014-03-06T15:53:19Z</updated>

		<summary type="html">&lt;p&gt;Eapache: Created page with &amp;quot;Can we do any kind of distributed system without crypto? We can&amp;#039;t trust crypto...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Can we do any kind of distributed system without crypto? We can&#039;t trust crypto...&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18517</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18517"/>
		<updated>2014-01-28T15:56:23Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Unix and Plan 9 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
* Pick a primary paper.&lt;br /&gt;
* Find papers that cite that paper, papers it cites, etc. to collect a body of related work.&lt;br /&gt;
* Don&#039;t just give a history, tell a story!&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
UNIX was built as &amp;quot;a castrated version of Multics&amp;quot;, which was a very complex system. Multics was, arguably, so far ahead of its time that we are only just achieving its ambitions now. Unix was much more modest, and therefore much more achievable and successful: just enough infrastructure to avoid reinventing the wheel, just a couple of programmers making something for their own use. Unix was not designed as a product or commercial entity at all. It was licensed out because AT&amp;amp;T was under severe antitrust scrutiny at the time.&lt;br /&gt;
&lt;br /&gt;
They wanted few, simple abstractions so they made everything a file. Berkeley promptly broke this abstraction by introducing sockets for networking. Plan 9 finally introduced networking using the right abstractions, but was too late. Arguably the reason the BSD folks didn&#039;t use the file abstraction was because of the difference in reliability. Files are generally reliable, and failures with them are catastrophic so many applications simply didn&#039;t include logic to handle such IO errors. Networks are much less reliable and applications have to be able to deal gracefully with timeouts and other errors.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18516</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18516"/>
		<updated>2014-01-28T15:51:38Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Unix and Plan 9 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
* Pick a primary paper.&lt;br /&gt;
* Find papers that cite that paper, papers it cites, etc. to collect a body of related work.&lt;br /&gt;
* Don&#039;t just give a history, tell a story!&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
UNIX was built as &amp;quot;a castrated version of Multics&amp;quot;, which was a very complex system. Multics was, arguably, so far ahead of its time that we are only just achieving its ambitions now. Unix was much more modest, and therefore much more achievable and successful: just enough infrastructure to avoid reinventing the wheel, just a couple of programmers making something for their own use. Unix was not designed as a product or commercial entity at all. It was licensed out because AT&amp;amp;T was under severe antitrust scrutiny at the time.&lt;br /&gt;
&lt;br /&gt;
They wanted few, simple abstractions so they made everything a file. Berkeley promptly broke this abstraction by introducing sockets for networking. Plan 9 finally introduced networking using the right abstractions, but was too late.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18515</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18515"/>
		<updated>2014-01-28T15:42:31Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Unix and Plan 9 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
* Pick a primary paper.&lt;br /&gt;
* Find papers that cite that paper, papers it cites, etc. to collect a body of related work.&lt;br /&gt;
* Don&#039;t just give a history, tell a story!&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
UNIX was built as &amp;quot;a castrated version of Multics&amp;quot;, which was a very complex system. Multics was, arguably, so far ahead of its time that we are only just achieving its ambitions now. Unix was much more modest, and therefore much more achievable and successful: just enough infrastructure to avoid reinventing the wheel, just a couple of programmers making something for their own use. Unix was not designed as a product or commercial entity at all. It was licensed out because AT&amp;amp;T was under severe antitrust scrutiny at the time.&lt;br /&gt;
&lt;br /&gt;
They wanted few, simple abstractions so they made everything a file.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18514</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18514"/>
		<updated>2014-01-28T15:30:50Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
* Pick a primary paper.&lt;br /&gt;
* Find papers that cite that paper, papers it cites, etc. to collect a body of related work.&lt;br /&gt;
* Don&#039;t just give a history, tell a story!&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
...&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18513</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18513"/>
		<updated>2014-01-28T15:30:20Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
 * Pick a primary paper.&lt;br /&gt;
 * Find papers that cite that paper, papers it cites, etc. to collect a body of related work.&lt;br /&gt;
 * Don&#039;t just give a history, tell a story!&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
...&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18512</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18512"/>
		<updated>2014-01-28T15:26:28Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* Project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
...&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18511</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18511"/>
		<updated>2014-01-28T15:23:26Z</updated>

		<summary type="html">&lt;p&gt;Eapache: Created page with &amp;quot;== Project ==  We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen i...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback.&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
...&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_2&amp;diff=18447</id>
		<title>DistOS 2014W Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_2&amp;diff=18447"/>
		<updated>2014-01-19T20:01:32Z</updated>

		<summary type="html">&lt;p&gt;Eapache: Created page with &amp;quot;(Not sure who originally volunteered to add this lecture, but they haven&amp;#039;t put it up so I&amp;#039;m uploading my incomplete notes. Hopefully somebody will be able to fill it in with m...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;(Not sure who originally volunteered to add this lecture, but they haven&#039;t put it up so I&#039;m uploading my incomplete notes. Hopefully somebody will be able to fill it in with more detail.)&lt;br /&gt;
&lt;br /&gt;
We now have a working definition of a Distributed OS, so we look a little closer at the underlying network. The internet (and thus the vast majority of distributed OS work today) runs over the [https://en.wikipedia.org/wiki/TCP_IP TCP and IP protocols].&lt;br /&gt;
&lt;br /&gt;
Anil observed that the Dist. OS abstractions which succeed are ones that don&#039;t hide the network. For example, the remote procedure call (RPC) style abstractions have generally failed because they try to hide the untrusted nature of the network. The result has been a hodge-podge of firewall software devoted primarily to blocking RPC-based protocols like SMB, NFS, etc. REST, on the other hand, has succeeded on the open web because it doesn&#039;t &amp;quot;hide the network&amp;quot; in this way.&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18404</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18404"/>
		<updated>2014-01-16T16:02:41Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* CPU, Memory, Disk */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic, etc.). There were two levels of task-switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtual machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the memory above 64K was not directly addressable and could only be reached through special tricks, much as memory above 4GB is not accessible today on 32-bit systems without similar tricks.&lt;br /&gt;
&lt;br /&gt;
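The kind of trick involved can be sketched as follows (an assumed bank-register scheme, purely for illustration; this is not the Alto's actual mechanism):

```python
# Illustrative sketch of reaching memory beyond a 16-bit address space via an
# assumed bank register (not the Alto's real mechanism): a 16-bit address can
# only name 64K words, so higher memory is reached by combining the address
# with a separately managed bank number.

BANK_SIZE = 64 * 1024  # words reachable with a bare 16-bit address

def physical_address(bank, offset):
    # the offset must fit in 16 bits; the bank register supplies the rest
    assert offset in range(BANK_SIZE)
    return bank * BANK_SIZE + offset

# bank 0 is the ordinary first 64K; anything above needs the bank register
assert physical_address(0, 100) == 100
assert physical_address(3, 0) == 3 * BANK_SIZE  # beyond what 16 bits can name
```

The same pattern shows up throughout computing history whenever an address width outlives the amount of memory people want to attach.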
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
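The cooperative scheme described above can be sketched in a few lines (illustrative Python, obviously nothing like Alto microcode):

```python
# Sketch of cooperative task switching: each task keeps the processor until it
# explicitly yields, just as an Alto task ran until it executed the instruction
# that gave up control. A task that never yields starves everything else.

def task(name, steps, trace):
    for i in range(steps):
        trace.append((name, i))  # do one unit of work
        yield                    # voluntarily give up the processor

def run(tasks):
    # simple round-robin scheduler over the ready tasks
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)
            tasks.append(current)  # the task yielded cooperatively; re-queue it
        except StopIteration:
            pass                   # the task finished; drop it

trace = []
run([task("disk", 2, trace), task("display", 2, trace), task("user", 1, trace)])
# trace interleaves the tasks: disk, display, user, disk, display
```

If `task` simply never executed `yield`, `run` would spin on it forever, which is exactly the security problem noted above.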
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
The disk controller exposed commands such as read, write, truncate, and delete. To reduce the risk of global damage, structural information was stored in a label on each page, so corruption of one structure could be detected rather than silently spreading across the disk. The directory served only as a hint for finding where a file resided on disk, and file integrity was checked using a seal bit together with the label.&lt;br /&gt;
&lt;br /&gt;
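The page-label idea can be sketched as follows (names and layout are invented here for illustration; this is not the real Alto disk format):

```python
# Hypothetical sketch of per-page labels: each disk page records which file
# and page it belongs to, so a reader (or a scavenger rebuilding the disk)
# can detect damage instead of trusting the directory blindly. The directory
# then becomes merely a hint about where files live.

class Page:
    def __init__(self, file_id, page_no, data):
        # the label travels with the page itself, not only with the directory
        self.label = {"file_id": file_id, "page_no": page_no}
        self.data = data

def read_page(page, expected_file, expected_no):
    # check the label before trusting the data; a mismatch signals damage
    expected = {"file_id": expected_file, "page_no": expected_no}
    if page.label != expected:
        raise IOError("label mismatch: page does not belong here")
    return page.data

p = Page("memo.txt", 0, "hello")
assert read_page(p, "memo.txt", 0) == "hello"  # label matches, data trusted
```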
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
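A back-of-the-envelope check makes the "half the memory" claim plausible, assuming the commonly cited 606 x 808 one-bit-per-pixel Alto display (figures assumed here, not taken from these notes):

```python
# Rough arithmetic on the Alto display memory budget, assuming a 606 x 808
# display at one bit per black-or-white pixel.

display_bits = 606 * 808             # one bit per pixel
display_words = display_bits // 16   # packed into 16-bit words
total_words = 64 * 1024              # the base 64K-word machine

fraction = display_words / total_words
# display_words is about 30,600 of 65,536 words: indeed roughly half
```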
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by conventional frame buffer organizations, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple today, which is 1300 dpi) and had three buttons (one of the standard configurations of mice produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence was the introduction of optical mice, although that did not altogether halt the use of non-optical mice. Today, we just have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
They state that the printer should print, in one second, an 8.5 by 11 inch page defined at 350 dots/inch (roughly 4000 horizontal scan lines of 3000 dots each). Ironically, this is a higher resolution than they achieved for the actual Alto display. However, they did not have enough memory to do this and had to work around it using techniques such as an incremental algorithm and reducing the number of scan lines. We were disappointed that they did not actually discuss the hardware implementation of the printer, only the software controller. Still, it is interesting that dividing the memory requirements of printing between the printer hardware itself and the computer was quite a modern idea at the time, and still is.&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes passing mention of a tablet to draw on. However, he stated that no one really liked having the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
A recurring theme was the lack of memory to implement what they had originally envisioned.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18400</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18400"/>
		<updated>2014-01-16T15:56:36Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* CPU, Memory, Disk */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
====CPU====&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic, etc.). There were two levels of task-switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtual machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
====Memory====&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
====Task Switching====&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
====Disk and Filesystem====&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by conventional frame buffer organizations, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple today, which is 1300 dpi) and had three buttons (one of the standard configurations of mice produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence was the introduction of optical mice, although that did not altogether halt the use of non-optical mice. Today, we just have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes passing mention of a tablet to draw on. However, he stated that no one really liked having the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18399</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18399"/>
		<updated>2014-01-16T15:54:53Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* CPU, Memory, Disk */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic, etc.). There were two levels of task-switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtual machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
One thing that was confusing was that they refer to tasks both as the 16 fixed hardware tasks and the many software tasks that could be multiplexed onto the lowest-priority of those hardware tasks. In either case, task switching was cooperative; until a task gave up control by running a specific instruction, no other task could run. From a modern perspective this looks like a major security problem, since malicious software could simply never relinquish the CPU. However, the fact that hardware was first-class in this sense (with full access to the CPU and memory) made the hardware simpler because much of the complexity could be done in software. Perhaps the first hints of what we now think of as drivers?&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
===Graphics===&lt;br /&gt;
&lt;br /&gt;
A lot of time was spent on what paper and ink provide us in a display sense, constantly referencing an 8.5 by 11 piece of paper as the type of display they were striving for. This showed what they were attempting to emulate in the Alto&#039;s display. The authors proposed 500 - 1000 black or white bits per inch of display (i.e. 500 - 1000 dpi). However, they were unable to pursue this goal, instead settling for 70 dpi for the display, allowing them to show things such as 10 pt text. They state that a 30 Hz refresh rate was found not to be objectionable. Interestingly, however, we would find this objectionable today--most likely from being spoiled with the sheer speed of computers today, whereas the authors were used to slower performance. The Alto&#039;s display took up &#039;&#039;&#039;half&#039;&#039;&#039; the Alto&#039;s memory, a choice we found very interesting. &lt;br /&gt;
&lt;br /&gt;
Another interesting point was that the authors state that they thought it was beneficial that they could access display memory directly rather than using conventional frame buffer organizations. While we are unsure of what they meant by conventional frame buffer organizations, it is interesting to note that frame buffers are what we use today for our displays.&lt;br /&gt;
&lt;br /&gt;
===Mouse===&lt;br /&gt;
&lt;br /&gt;
The mouse outlined in the paper was 200 dpi (vs. a standard mouse from Apple today, which is 1300 dpi) and had three buttons (one of the standard configurations of mice produced today). They were already using different mouse cursors (i.e., the pointer image of the cursor on screen). The really interesting point here is that the design outlined in the paper is so similar to designs we still use today. The only real divergence was the introduction of optical mice, although that did not altogether halt the use of non-optical mice. Today, we just have more flexibility in how we design mice (e.g., a scroll wheel, more buttons, etc.).&lt;br /&gt;
&lt;br /&gt;
===Printer===&lt;br /&gt;
&lt;br /&gt;
===Other Interesting Notes===&lt;br /&gt;
&lt;br /&gt;
We found it interesting that peripheral devices were included at all.&lt;br /&gt;
&lt;br /&gt;
The author makes passing mention of a tablet to draw on. However, he stated that no one really liked having the tablet, as it got in the way of the keyboard.&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18395</id>
		<title>DistOS 2014W Lecture 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_4&amp;diff=18395"/>
		<updated>2014-01-16T15:45:50Z</updated>

		<summary type="html">&lt;p&gt;Eapache: /* CPU, Memory, Disk */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Discussions on the Alto&lt;br /&gt;
&lt;br /&gt;
==CPU, Memory, Disk==&lt;br /&gt;
&lt;br /&gt;
The general hardware architecture of the CPU was biased towards the user, meaning that a greater focus was put on IO capabilities and less focus was put on computational power (arithmetic, etc.). There were two levels of task-switching: the CPU provided sixteen fixed-priority tasks with hardware interrupts, each of which was permanently assigned to a piece of hardware. Only one of these tasks (the lowest-priority) was dedicated to the user. This task actually ran a virtual machine for BCPL (a C-like language); the user had no access at all to the underlying microcode. Other languages could be emulated as well.&lt;br /&gt;
&lt;br /&gt;
The Alto started with 64K of 16-bit words of memory and eventually grew to 256K words. However, the higher memory was not accessible except through special tricks, similar to the way that memory above 4GB is not accessible today on 32-bit systems without special tricks.&lt;br /&gt;
&lt;br /&gt;
==Ethernet, Networking protocols==&lt;br /&gt;
&lt;br /&gt;
==Graphics, Mouse, Printing==&lt;br /&gt;
&lt;br /&gt;
==Applications, Programming Environment==&lt;/div&gt;</summary>
		<author><name>Eapache</name></author>
	</entry>
</feed>