<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Abeinges</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Abeinges"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Abeinges"/>
	<updated>2026-04-22T10:09:04Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20153</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20153"/>
		<updated>2015-04-06T05:13:01Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Spanner */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing structured data behind many Google products, including Google Analytics, Google Finance, Orkut, Personalized Search, Writely, and Google Earth&lt;br /&gt;
* BigTable is a&lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional sorted map&lt;br /&gt;
* It is indexed by&lt;br /&gt;
** Row key: Every read or write of data under a single row key is atomic. Each row range is called a tablet. Choosing row keys carefully gives good locality for data access.&lt;br /&gt;
** Column key: Columns are grouped into sets called column families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Timestamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that need to avoid collisions must generate unique timestamps themselves.&lt;br /&gt;
* BigTable&#039;s &#039;&#039;&#039;API&#039;&#039;&#039; provides functions for&lt;br /&gt;
** Creating and deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column families&lt;br /&gt;
** Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used with MapReduce jobs both as an&lt;br /&gt;
*** Input source&lt;br /&gt;
*** Output target&lt;br /&gt;
* The timestamp mechanism lets clients access recent versions of data through the same simple row-and-column access model.&lt;br /&gt;
* Parallel computation and cluster management systems make BigTable flexible and highly scalable.&lt;br /&gt;
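The (row key, column key, timestamp) indexing scheme above can be sketched as a toy in-memory map. This is an illustration of the data model only, not Google&#039;s actual API; the class and method names are invented:&lt;br /&gt;

```python
import bisect

class TinyBigTable:
    """Toy sketch of BigTable's data model: a sparse, multi-dimensional
    sorted map keyed by (row, 'family:qualifier', timestamp).
    Real BigTable adds persistence, tablets, and access control."""

    def __init__(self):
        # (row, column) -> sorted list of (timestamp, value) versions
        self._cells = {}

    def write(self, row, column, timestamp, value):
        versions = self._cells.setdefault((row, column), [])
        bisect.insort(versions, (timestamp, value))

    def read(self, row, column, at=None):
        """Return the most recent value at or before `at` (latest if None)."""
        versions = self._cells.get((row, column), [])
        if not versions:
            return None
        if at is None:
            return versions[-1][1]
        i = bisect.bisect_right(versions, (at, chr(0x10FFFF)))
        return versions[i - 1][1] if i else None

t = TinyBigTable()
t.write("com.cnn.www", "anchor:cnnsi.com", 1, "CNN")
t.write("com.cnn.www", "anchor:cnnsi.com", 2, "CNN Sports")
```

Reads default to the newest version, which is the common case the timestamp mechanism is designed to make cheap.&lt;br /&gt;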
&lt;br /&gt;
== Dynamo ==&lt;br /&gt;
* Amazon&#039;s key-value store&lt;br /&gt;
* Availability is the watchword: for Dynamo, availability comes first.&lt;br /&gt;
* Shifted the distributed-systems mindset from prioritizing consistency to prioritizing availability.&lt;br /&gt;
* Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
* Treats failure handling as the normal case, without impact on availability or performance.&lt;br /&gt;
* Data is partitioned and replicated using consistent hashing; consistency is facilitated by object versioning.&lt;br /&gt;
* The system&#039;s requirements and assumptions include:&lt;br /&gt;
** Query model: Simple read and write operations on data items that are uniquely identified by a key.&lt;br /&gt;
** ACID properties: Dynamo trades the strong consistency of ACID (Atomicity, Consistency, Isolation, Durability) for availability, and offers no isolation guarantees.&lt;br /&gt;
** Efficiency: The system needs to function on commodity hardware infrastructure.&lt;br /&gt;
* Service Level Agreements (SLAs): A negotiated contract between a client and a service covering system characteristics; an SLA guarantees that an application can deliver its functionality within a bounded time.&lt;br /&gt;
* System architecture: consists of the &#039;&#039;system interface&#039;&#039;, &#039;&#039;partitioning algorithm&#039;&#039;, &#039;&#039;replication&#039;&#039;, and &#039;&#039;data versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability, and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
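The partitioning-and-replication bullet above can be sketched with a toy consistent-hash ring. The names (Ring, preference_list) are invented for illustration; real Dynamo layers virtual-node placement strategies, versioning, and quorums on top:&lt;br /&gt;

```python
import hashlib
from bisect import bisect

def _h(key: str) -> int:
    # Stable hash onto the ring (MD5 here purely for illustration).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring: nodes and keys hash onto the same circle,
    and a key is stored on the first n distinct nodes clockwise from it
    (Dynamo's 'preference list'). Virtual nodes smooth out the load."""

    def __init__(self, nodes, vnodes=16):
        self._ring = sorted((_h(f"{n}#{v}"), n)
                            for n in nodes for v in range(vnodes))

    def preference_list(self, key, n=3):
        points = [p for p, _ in self._ring]
        i = bisect(points, _h(key)) % len(self._ring)
        out = []
        for j in range(len(self._ring)):
            node = self._ring[(i + j) % len(self._ring)][1]
            if node not in out:
                out.append(node)
            if len(out) == n:
                break
        return out

ring = Ring(["A", "B", "C", "D"])
replicas = ring.preference_list("shopping-cart:alice", n=3)
```

Because placement is a pure function of the key and the ring, adding or removing a node only remaps the keys near it, which is what makes incremental scaling cheap.&lt;br /&gt;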
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
* Facebook&#039;s storage system, built to meet the needs of the Inbox Search problem&lt;br /&gt;
* Partitions data across the cluster using consistent hashing.&lt;br /&gt;
* A distributed multi-dimensional map indexed by a key&lt;br /&gt;
* In its data model:&lt;br /&gt;
** Columns are grouped together into sets called column families, which come in two types:&lt;br /&gt;
*** Simple column families&lt;br /&gt;
*** Super column families&lt;br /&gt;
* API consists of:&lt;br /&gt;
** Insert&lt;br /&gt;
** Get&lt;br /&gt;
** Delete&lt;br /&gt;
* System architecture consists of:&lt;br /&gt;
** Partitioning: done with consistent hashing&lt;br /&gt;
** Replication: Each item is replicated at &amp;quot;n&amp;quot; hosts, where &amp;quot;n&amp;quot; is the replication factor configured per instance.&lt;br /&gt;
** Membership: Cluster membership is based on Scuttlebutt, a highly efficient anti-entropy gossip-based mechanism. Membership also covers:&lt;br /&gt;
*** Failure detection&lt;br /&gt;
*** Bootstrapping&lt;br /&gt;
*** Scaling the cluster&lt;br /&gt;
* It can run on cheap commodity hardware and handle high throughput&lt;br /&gt;
* Its flexible structure makes it very scalable&lt;br /&gt;
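The data model and three-call API above can be sketched as a toy in-memory store. This is only an illustration under those assumptions (the class name is invented); real Cassandra adds timestamps, super columns, and distribution:&lt;br /&gt;

```python
class TinyCassandra:
    """Toy sketch of Cassandra's data model: a map indexed by row key,
    then column family, then column name, with the insert/get/delete
    calls the notes mention. Purely illustrative."""

    def __init__(self, column_families):
        # column family -> row key -> column name -> value
        self._data = {cf: {} for cf in column_families}

    def insert(self, cf, row_key, column, value):
        self._data[cf].setdefault(row_key, {})[column] = value

    def get(self, cf, row_key, column):
        return self._data[cf].get(row_key, {}).get(column)

    def delete(self, cf, row_key, column):
        self._data[cf].get(row_key, {}).pop(column, None)

db = TinyCassandra(["inbox"])
db.insert("inbox", "user42", "msg:1001", "hello")
```

In the distributed system, the row key is also what consistent hashing uses to pick the replicas for the row.&lt;br /&gt;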
&lt;br /&gt;
== Spanner ==&lt;br /&gt;
* Google&#039;s scalable, multi-version, globally distributed database.&lt;br /&gt;
* Evolved from Google&#039;s BigTable.&lt;br /&gt;
* Provides data consistency and supports a SQL-like interface.&lt;br /&gt;
* Uses a separate high-reliability time service (TrueTime) to guarantee its correctness properties around concurrency control.&lt;br /&gt;
** The globally meaningful commit timestamps it assigns make this possible.&lt;br /&gt;
* It shards data across machines and migrates data between machines automatically&lt;br /&gt;
* Data-control features in Spanner let applications control latency and performance&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=20152</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=20152"/>
		<updated>2015-04-06T05:11:13Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Naiad */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are:&lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants&lt;br /&gt;
&lt;br /&gt;
*Applications written in common languages can run under BOINC with no modification&lt;br /&gt;
*A BOINC project is identified by a single master URL, which serves as its homepage and as a directory of its servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public-resource computing to analyze radio signals in search of extraterrestrial intelligence&lt;br /&gt;
*The search needs a good radio telescope and a great deal of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public-resource computing projects&lt;br /&gt;
*Originally custom-built; now uses BOINC as the backbone of the project&lt;br /&gt;
*Uses a relational database to store information at large scale, and a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive to use it, however, is that it is a cheap and easy way of scaling the work massively&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though largely outside the legitimate world&lt;br /&gt;
*Formed a good proof of concept for public-resource and distributed computing by providing a platform-independent framework&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google for large-scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional-style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to each input record to produce intermediate key/value pairs&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates the results for each key using a given function&lt;br /&gt;
* Hides parallelization, fault tolerance, locality optimization, and load balancing&lt;br /&gt;
* Very easy to use and understand, and many classic problems fit the pattern&lt;br /&gt;
* Otherwise quite constrained in what exactly can be done&lt;br /&gt;
* Uses hashing so that identical keys land on the same machine, while otherwise spreading the load&lt;br /&gt;
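The map/reduce split above can be shown with the classic word-count example. This single-process sketch is only illustrative: in the real system, the framework runs the shuffle step across machines (using hashing of the intermediate keys), not in one dictionary:&lt;br /&gt;

```python
from collections import defaultdict

def map_fn(document):
    """Map (filtering): emit an intermediate (key, value) pair per word."""
    for word in document.split():
        yield word, 1

def reduce_fn(key, values):
    """Reduce (summary): accumulate all values for one key."""
    return key, sum(values)

def map_reduce(documents):
    # The framework, not the user, groups intermediate pairs by key
    # (the "shuffle"); here a dict stands in for that machinery.
    groups = defaultdict(list)
    for doc in documents:
        for k, v in map_fn(doc):
            groups[k].append(v)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

counts = map_reduce(["to be or not to be"])
```

The user writes only the two pure functions; parallelization, fault tolerance, and load balancing stay hidden behind this interface.&lt;br /&gt;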
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities, so results are available almost instantly&lt;br /&gt;
*A distributed system for executing data-parallel cyclic dataflow programs, offering high throughput and low latency&lt;br /&gt;
*Aims to be a general-purpose system that supports a wide variety of high-level programming models&lt;br /&gt;
*Well suited to parallel data processing&lt;br /&gt;
*Provides checkpoint and restore functionality&lt;br /&gt;
*A complex framework that can serve as the backend for simpler models of computation, like LINQ or MapReduce, built on top of it&lt;br /&gt;
*Real-world applications:&lt;br /&gt;
:*Batch iterative machine learning:&lt;br /&gt;
VW, an open-source distributed machine learning system, performs each iteration in 3 phases: each process updates local state; processes independently train on local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming acyclic computation:&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes Twitter data and provides counts of hashtag occurrences as well as links between popular tags, the equivalent program written using Naiad took 26 lines of code and ran close to 2x faster.&lt;br /&gt;
* The Naiad paper won the best paper award at SOSP 2013; see the Microsoft Research project page at http://research.microsoft.com/en-us/projects/naiad/ , which includes videos explaining Naiad, among them Derek Murray&#039;s SOSP 2013 presentation.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=20151</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=20151"/>
		<updated>2015-04-06T05:09:34Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* MapReduce */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are:&lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants&lt;br /&gt;
&lt;br /&gt;
*Applications written in common languages can run under BOINC with no modification&lt;br /&gt;
*A BOINC project is identified by a single master URL, which serves as its homepage and as a directory of its servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public-resource computing to analyze radio signals in search of extraterrestrial intelligence&lt;br /&gt;
*The search needs a good radio telescope and a great deal of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public-resource computing projects&lt;br /&gt;
*Originally custom-built; now uses BOINC as the backbone of the project&lt;br /&gt;
*Uses a relational database to store information at large scale, and a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive to use it, however, is that it is a cheap and easy way of scaling the work massively&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though largely outside the legitimate world&lt;br /&gt;
*Formed a good proof of concept for public-resource and distributed computing by providing a platform-independent framework&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google for large-scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional-style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to each input record to produce intermediate key/value pairs&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates the results for each key using a given function&lt;br /&gt;
* Hides parallelization, fault tolerance, locality optimization, and load balancing&lt;br /&gt;
* Very easy to use and understand, and many classic problems fit the pattern&lt;br /&gt;
* Otherwise quite constrained in what exactly can be done&lt;br /&gt;
* Uses hashing so that identical keys land on the same machine, while otherwise spreading the load&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities, so results are available almost instantly&lt;br /&gt;
*A distributed system for executing data-parallel cyclic dataflow programs, offering high throughput and low latency&lt;br /&gt;
*Aims to be a general-purpose system that supports a wide variety of high-level programming models&lt;br /&gt;
*Well suited to parallel data processing&lt;br /&gt;
*Provides checkpoint and restore functionality&lt;br /&gt;
*Real-world applications:&lt;br /&gt;
:*Batch iterative machine learning:&lt;br /&gt;
VW, an open-source distributed machine learning system, performs each iteration in 3 phases: each process updates local state; processes independently train on local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming acyclic computation:&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes Twitter data and provides counts of hashtag occurrences as well as links between popular tags, the equivalent program written using Naiad took 26 lines of code and ran close to 2x faster.&lt;br /&gt;
* The Naiad paper won the best paper award at SOSP 2013; see the Microsoft Research project page at http://research.microsoft.com/en-us/projects/naiad/ , which includes videos explaining Naiad, among them Derek Murray&#039;s SOSP 2013 presentation.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=20150</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=20150"/>
		<updated>2015-04-06T05:06:09Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* SETI@Home */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are:&lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants&lt;br /&gt;
&lt;br /&gt;
*Applications written in common languages can run under BOINC with no modification&lt;br /&gt;
*A BOINC project is identified by a single master URL, which serves as its homepage and as a directory of its servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public-resource computing to analyze radio signals in search of extraterrestrial intelligence&lt;br /&gt;
*The search needs a good radio telescope and a great deal of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public-resource computing projects&lt;br /&gt;
*Originally custom-built; now uses BOINC as the backbone of the project&lt;br /&gt;
*Uses a relational database to store information at large scale, and a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive to use it, however, is that it is a cheap and easy way of scaling the work massively&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though largely outside the legitimate world&lt;br /&gt;
*Formed a good proof of concept for public-resource and distributed computing by providing a platform-independent framework&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google for large-scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional-style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
* Hides parallelization, fault tolerance, locality optimization, and load balancing&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities, so results are available almost instantly&lt;br /&gt;
*A distributed system for executing data-parallel cyclic dataflow programs, offering high throughput and low latency&lt;br /&gt;
*Aims to be a general-purpose system that supports a wide variety of high-level programming models&lt;br /&gt;
*Well suited to parallel data processing&lt;br /&gt;
*Provides checkpoint and restore functionality&lt;br /&gt;
*Real-world applications:&lt;br /&gt;
:*Batch iterative machine learning:&lt;br /&gt;
VW, an open-source distributed machine learning system, performs each iteration in 3 phases: each process updates local state; processes independently train on local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming acyclic computation:&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes Twitter data and provides counts of hashtag occurrences as well as links between popular tags, the equivalent program written using Naiad took 26 lines of code and ran close to 2x faster.&lt;br /&gt;
* The Naiad paper won the best paper award at SOSP 2013; see the Microsoft Research project page at http://research.microsoft.com/en-us/projects/naiad/ , which includes videos explaining Naiad, among them Derek Murray&#039;s SOSP 2013 presentation.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20149</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20149"/>
		<updated>2015-04-06T05:04:30Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate with it directly to perform IO operations&lt;br /&gt;
** Data is stored in objects (variable-size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with it to perform metadata operations such as open and rename. It also manages client capabilities.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** It distributes the metadata among multiple metadata servers using dynamic sub-tree partitioning, meaning folders that get used more often get their meta-data replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the location of objects instead of looking them up&lt;br /&gt;
** This significantly reduces load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
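The simplified placement rule above (&#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;) can be sketched as a pure function every client computes locally. This is not the real CRUSH algorithm, which walks a weighted device hierarchy; the function name and replica rule here are invented for illustration:&lt;br /&gt;

```python
import hashlib

def place(object_name: str, num_osds: int, replicas: int = 3):
    """Toy stand-in for CRUSH along the lines of the simplification in
    the notes: every client derives the same OSD list from the object
    name alone, with no lookup on a metadata server."""
    digest = int(hashlib.sha256(object_name.encode()).hexdigest(), 16)
    primary = digest % num_osds
    # Invented replica rule: the next OSDs around the ring, made distinct
    # by taking consecutive slots modulo the cluster size.
    return [(primary + i) % num_osds for i in range(min(replicas, num_osds))]

osds = place("10000000005.00000000", num_osds=8)
```

Because placement is computed rather than looked up, the MDS never sits on the data path, which is the scalability point the notes make.&lt;br /&gt;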
&lt;br /&gt;
= Chubby =&lt;br /&gt;
It is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of 5 servers, known as replicas&lt;br /&gt;
** A consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from among the replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi POSIX-compliant file-system with a 256KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which one is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be improved further with caching servers, since most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS to elect its master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* A Chubby cell runs Paxos across only 5 replicas. While this limits fault tolerance, in practice it is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to prove that they still exist, so that a client disappearing while holding a lock does not hold that lock indefinitely.&lt;br /&gt;
* Clients must consequently also re-verify that they hold a lock they think they hold, because Chubby may have timed them out.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20148</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20148"/>
		<updated>2015-04-06T05:03:56Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate with it directly to perform IO operations&lt;br /&gt;
** Data is stored in objects (variable-size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with it to perform metadata operations such as open and rename. It also manages client capabilities.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** It distributes the metadata among multiple metadata servers using dynamic sub-tree partitioning, meaning folders that get used more often get their meta-data replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the location of objects instead of looking them up&lt;br /&gt;
** This significantly reduces load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
It is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of 5 servers, known as replicas&lt;br /&gt;
** A consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from among the replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi POSIX-compliant file-system with a 256KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which one is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be improved further with caching servers, since most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS to elect its master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* A Chubby cell runs Paxos across only 5 replicas. While this limits fault tolerance, in practice it is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to prove that they still exist, so that a client disappearing while holding a lock does not hold that lock indefinitely.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20147</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20147"/>
		<updated>2015-04-06T05:02:07Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Main Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** It basically stores data and metadata and clients communicate directly with it to perform IO operations&lt;br /&gt;
** Data is stored in objects (variable size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with it to perform metadata operations such as open and rename, and it manages each client&#039;s capabilities.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** Metadata is distributed among multiple metadata servers using dynamic sub-tree partitioning: directories that are used more often have their metadata replicated to more servers, spreading the load. This happens completely automatically.&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the location of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
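The &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039; idea above can be sketched in Python. This is a toy stand-in, not real CRUSH: the hash choice and the consecutive-server replica rule are illustrative assumptions (real CRUSH walks a weighted hierarchy of buckets so replicas land in different failure domains):&lt;br /&gt;

```python
import hashlib

def place(object_name, num_servers, num_replicas=3):
    """Toy CRUSH-like placement: replica locations are computed purely
    from the object name, so any client can find data with no lookup.
    Real CRUSH walks a weighted bucket hierarchy to respect failure
    domains; the md5-based rule here is an illustrative assumption."""
    digest = hashlib.md5(object_name.encode()).digest()
    primary = digest[0] % num_servers
    # Replicas simply go on the next servers in sequence.
    return [(primary + i) % num_servers for i in range(num_replicas)]

print(place("foo/bar", 10))  # same answer on every client, every time
```

Since every client derives the same locations from the name alone, no metadata server is consulted on the data path.&lt;br /&gt;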
= Chubby =&lt;br /&gt;
Chubby is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
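The role of the quorum can be shown with a minimal sketch. This is &#039;&#039;not&#039;&#039; Paxos itself (which needs proposal numbers and a two-phase protocol to stay safe under failures and races); it only illustrates the strict-majority rule that a five-replica cell relies on:&lt;br /&gt;

```python
from collections import Counter

def elect_master(votes, num_replicas=5):
    """Toy majority election, not Paxos itself: a candidate becomes
    master only if a strict majority of replicas vote for it."""
    if not votes:
        return None
    candidate, count = Counter(votes).most_common(1)[0]
    if 2 * count > num_replicas:  # strict majority quorum
        return candidate
    return None

print(elect_master(["A", "A", "A", "B", "B"]))  # A: 3 of 5 is a majority
print(elect_master(["A", "A", "B", "B"]))       # None: no majority of 5
```

With five replicas, any two can fail and the remaining three still form a majority, which is why the small cell size is enough in practice.&lt;br /&gt;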
== Main Features ==&lt;br /&gt;
* Implemented as a semi-POSIX-compliant file system with a 256 KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which one is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved with caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS to elect its master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
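Because the interface is a file system, acquiring a coarse lock looks like ordinary file handling. The sketch below is a hypothetical local stand-in: a file plus &#039;&#039;flock()&#039;&#039; plays the role of a lock in a Chubby cell, and the path and function names are invented for illustration:&lt;br /&gt;

```python
import fcntl, os

# Hypothetical stand-in: a local file plays the role of a Chubby lock
# file, and flock() plays the role of the acquire call. Real Chubby
# locks live in a replicated cell and are held for hours or days.
LOCK_PATH = "/tmp/chubby-demo.lock"

def become_master():
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # fail fast if held
    os.write(fd, str(os.getpid()).encode())          # advertise holder
    return fd  # keep fd open: closing it releases the lock

fd = become_master()
# ... act as master; a keep-alive loop would periodically renew the
# session so a crashed holder does not keep the lock forever ...
os.close(fd)
```

The point is that an election reduces to "whoever holds the lock file is master", which is exactly how GFS uses Chubby.&lt;br /&gt;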
== Issues ==&lt;br /&gt;
* Because it relies on Paxos consensus (whose cost grows with the number of replicas), the Chubby cell is limited to five servers. While this limits fault tolerance, in practice it is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20146</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20146"/>
		<updated>2015-04-06T05:01:06Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate directly with it to perform I/O operations&lt;br /&gt;
** Data is stored in objects (variable size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with it to perform metadata operations such as open and rename, and it manages each client&#039;s capabilities.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** Metadata is distributed among multiple metadata servers using dynamic sub-tree partitioning: directories that are used more often have their metadata replicated to more servers, spreading the load. This happens completely automatically.&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the location of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
Chubby is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi-POSIX-compliant file system with a 256 KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm among a set of servers to agree on which one is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved with caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS to elect its master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Because it relies on Paxos consensus (whose cost grows with the number of replicas), the Chubby cell is limited to five servers. While this limits fault tolerance, in practice it is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20145</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20145"/>
		<updated>2015-04-06T04:57:24Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Group 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention what programming language they used, but it was based on Windows. They did not implement the file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a university or large corporation, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;Farsite&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;OceanStore&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Implemented Content Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Update Model handled data consistency, no Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Single tier, peer to peer model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Two tier, server client model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Cryptographic public, private key security&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Read and write privileges&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Randomized data replication&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Nomadic Data concept&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== FARSITE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* Cascading certificate system through the directory hierarchy&lt;br /&gt;
* Keys&lt;br /&gt;
* Three types of certificates&lt;br /&gt;
* CFS required to authorize certificates&lt;br /&gt;
* Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, the group is trusted not to make an incorrect update to directory metadata. This metadata includes an access control list (ACL) of the public keys of all users who are authorized writers to that directory and to files in it.&lt;br /&gt;
* Both file content and user-sensitive metadata (file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039;&lt;br /&gt;
* Client monitor, directory group, file host&lt;br /&gt;
* When space runs out in a directory group, it delegates ownership of a sub-tree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== OCEANSTORE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* GUIDs and ACLs are used for writes; encryption is used for reads.&lt;br /&gt;
* To prevent unauthorized reads, it encrypts all data in the system that is not completely public and distributes the encryption key to those users with read permission.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20144</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20144"/>
		<updated>2015-04-06T04:56:10Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Google File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
Clouds is structured around an object-thread model. It has a set of objects, each defined by a class. Objects respond to messages: sending a message to an object causes it to execute the corresponding method and reply.&lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects are the objects which have one or more processes associated with them and further they can communicate with the external environment. &lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
Clouds data is long-lived. Since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are the logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space; several threads can enter an object simultaneously and execute concurrently. The nature of a Clouds object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural&lt;br /&gt;
# Invocations work across machine boundaries&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general-purpose, multi-user and mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs, the birthplace of Unix. The original Unix OS had no support for networking, and over the years there were many attempts by others to build distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost effective using cheap, modern microcomputers. &lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This is made possible by two properties:&lt;br /&gt;
# A per-process-group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries and system calls are similar to those of Unix, so a casual user cannot distinguish between the two. The problems in Unix were too deep to fix in place, but many of its ideas were carried along: the things Unix handled badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Similarities with Unix ==&lt;br /&gt;
* The shell&lt;br /&gt;
* Various C compilers&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along the lines of service function.&lt;br /&gt;
* CPU servers and terminals use the same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P, is used to access resources.&lt;br /&gt;
# The disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls: mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; attaches a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; attaches a kernel-resident (existing, mounted) file system to the name space and can also rearrange pieces of the name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unbind&#039;&#039;&#039; which undoes the effects of the other two calls.&lt;br /&gt;
&lt;br /&gt;
Namespaces in Plan 9 are per-process. While everything has a unique name, every process can use mount and bind to build a custom namespace as it sees fit.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
&lt;br /&gt;
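A minimal sketch of the per-process namespace idea, with an invented &#039;&#039;Namespace&#039;&#039; class standing in for the kernel&#039;s mount table (bind here only models attaching an existing resource at a name):&lt;br /&gt;

```python
# Toy per-process namespace, in the spirit of Plan 9's mount/bind:
# each process carries its own table mapping names to resources, so
# two processes can see entirely different things at the same path.
class Namespace:
    def __init__(self, parent=None):
        # A child starts with a copy of its parent's view.
        self.table = dict(parent.table) if parent else {}

    def bind(self, name, resource):
        """Attach an existing resource at a point in this namespace."""
        self.table[name] = resource

    def resolve(self, name):
        return self.table[name]

shared = Namespace()
shared.bind("/bin/cc", "local compiler")

child = Namespace(parent=shared)               # new process group
child.bind("/bin/cc", "remote MIPS compiler")  # rearranged privately

print(shared.resolve("/bin/cc"))  # local compiler
print(child.resolve("/bin/cc"))   # remote MIPS compiler
```

Because resources are files, rearranging the namespace this way is all a program needs in order to run "locally" against remote resources.&lt;br /&gt;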
=== Parallel Programming ===&lt;br /&gt;
Parallel programming was supported in two ways:&lt;br /&gt;
* Kernel provides simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* Programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem which displays current process information in the form of files exists in modern Linux kernels.&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
It is a scalable, distributed file system for large, data-intensive applications, tailored to Google&#039;s unique needs as a search-engine company.&lt;br /&gt;
&lt;br /&gt;
Unlike most file systems, GFS is not part of the kernel; applications access it through a client library. While this introduces some overhead, it gives the system more freedom to implement, or omit, non-standard features.&lt;br /&gt;
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google file system consists of a single master, multiple chunk servers and multiple clients. Chunk servers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64 KB blocks, each with its own checksum for data-integrity checks. Chunks are replicated between servers, three ways by default. The master maintains all the file-system metadata, which includes the namespace and chunk locations.&lt;br /&gt;
&lt;br /&gt;
Each chunk is 64 MB large (contrast this with typical file-system sectors of 512 or 4096 bytes), as the system is meant to hold enormous amounts of data, namely the internet. The large chunk size is also important for scalability: the larger the chunk size, the less metadata the master server has to store for any given amount of data. With the current size, the master server can hold the entirety of the metadata in memory, improving performance by a significant margin.&lt;br /&gt;
&lt;br /&gt;
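A back-of-envelope calculation for the scalability claim above. The 64-bytes-of-metadata-per-chunk figure is an assumed round number for illustration only:&lt;br /&gt;

```python
# Bigger chunks mean fewer chunks, hence less metadata the master
# must hold in memory. META_PER_CHUNK is an assumed round number.
PB = 10**15
CHUNK_64MB = 64 * 2**20
META_PER_CHUNK = 64  # bytes of master metadata per chunk (assumed)

def master_metadata_bytes(total_data, chunk_size):
    return (total_data // chunk_size) * META_PER_CHUNK

print(master_metadata_bytes(PB, CHUNK_64MB))  # under 1 GB for 1 PB
print(master_metadata_bytes(PB, 4096))        # tens of TB at 4 KB chunks
```

At 64 MB chunks, 1 PB of data needs roughly 1 GB of chunk metadata, which fits in a single server&#039;s memory; with disk-sized 4 KB chunks it would be terabytes.&lt;br /&gt;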
== Operation ==&lt;br /&gt;
&lt;br /&gt;
Master and chunk-server communication consists of&lt;br /&gt;
# checking whether any chunk server is down&lt;br /&gt;
# checking whether any file is corrupted&lt;br /&gt;
# deleting stale chunks&lt;br /&gt;
&lt;br /&gt;
When a client wants to perform operations on chunks&lt;br /&gt;
# it first asks the master server for the list of servers that store the parts of the file it wants to access&lt;br /&gt;
# it receives a list of chunk servers, with multiple servers for each chunk&lt;br /&gt;
# it finally communicates with the chunk servers directly to perform the operation&lt;br /&gt;
&lt;br /&gt;
The system is geared towards appends and sequential reads. This is why the master server responds with multiple server addresses for each chunk: the client can then request a small piece from each server, increasing data throughput linearly with the number of servers. Writes are generally performed through a special &#039;&#039;append&#039;&#039; call. When appending, there is no chance that two clients will want to write to the same location at the same time, which avoids potential synchronization issues. If there are multiple appends to the same file at the same time, the chunk servers are free to order them as they wish (chunks on different servers are not guaranteed to be byte-for-byte identical), and changes may be applied multiple times. These issues are left for the application using GFS to resolve. While a problem in the general sense, this is good enough for Google&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
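The three-step read path above can be sketched with made-up in-memory classes (the class and method names are invented for illustration; the point is that the master is consulted only for locations, while bytes flow directly from the chunk servers):&lt;br /&gt;

```python
# Sketch of the GFS read path: one metadata lookup at the master,
# then direct reads from the chunk servers that hold each chunk.
class Master:
    def __init__(self, locations):
        self.locations = locations  # filename -> [(handle, server), ...]

    def lookup(self, filename):
        """Steps 1-2: return replica locations for the file's chunks."""
        return self.locations[filename]

class ChunkServer:
    def __init__(self, data):
        self.data = data  # handle -> bytes

    def read(self, handle):
        """Step 3: the client reads chunk bytes directly from us."""
        return self.data[handle]

s1 = ChunkServer({"c1": b"hello "})
s2 = ChunkServer({"c2": b"world"})
master = Master({"/logs/a": [("c1", s1), ("c2", s2)]})

# Client: one metadata RPC, then direct reads from each chunk server.
content = b"".join(srv.read(h) for h, srv in master.lookup("/logs/a"))
print(content)  # b'hello world'
```

Keeping bulk data off the master is what lets a single metadata server front an arbitrarily large cluster of chunk servers.&lt;br /&gt;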
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
GFS is built with failure in mind. The system expects that at any time, there is some server or disk that is malfunctioning. The system deals with the failures as follows.&lt;br /&gt;
&lt;br /&gt;
=== Chunk Servers ===&lt;br /&gt;
&lt;br /&gt;
By default, chunks are replicated to three servers; the exact number can be set by the application doing the write. When a chunk server finds that some of its data is corrupt, it fetches the data from other replicas to repair itself{{Citation needed}}.&lt;br /&gt;
&lt;br /&gt;
=== Master Server ===&lt;br /&gt;
&lt;br /&gt;
For efficiency, there is only a single live master server at a time. While this keeps the system from being completely distributed, it avoids many synchronization problems and suits Google&#039;s needs. At any point in time, multiple read-only master servers copy metadata from the currently live master. Should it go down, they serve read operations from clients until one of the hot spares is promoted to be the new live master server.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20143</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20143"/>
		<updated>2015-04-06T04:54:02Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Operation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
Clouds is structured around an object-thread model. It has a set of objects, each defined by a class. Objects respond to messages: sending a message to an object causes it to execute the corresponding method and reply.&lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects are the objects which have one or more processes associated with them and further they can communicate with the external environment. &lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
Clouds data is long-lived. Since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are the logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space; several threads can enter an object simultaneously and execute concurrently. The nature of a Clouds object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural&lt;br /&gt;
# Invocations work across machine boundaries&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general-purpose, multi-user and mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs, the birthplace of Unix. The original Unix OS had no support for networking, and over the years there were many attempts by others to build distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost effective using cheap, modern microcomputers. &lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This is made possible by two properties:&lt;br /&gt;
# A per-process-group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries and system calls are similar to those of Unix, so a casual user cannot distinguish between the two. The problems in Unix were too deep to fix in place, but many of its ideas were carried along: the things Unix handled badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Similarities with Unix ==&lt;br /&gt;
* The shell&lt;br /&gt;
* Various C compilers&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along the lines of service function.&lt;br /&gt;
* CPU servers and terminals use the same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P, is used to access resources.&lt;br /&gt;
# The disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls: mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; attaches a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; attaches a kernel-resident (existing, mounted) file system to the name space and can also rearrange pieces of the name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unbind&#039;&#039;&#039; which undoes the effects of the other two calls.&lt;br /&gt;
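&lt;br /&gt;
To make the namespace calls concrete, here is a minimal Python sketch (not Plan 9 code) of a per-process namespace built with mount and bind; the dictionary-based Namespace class and its method names are illustrative assumptions, not the real system call interface.&lt;br /&gt;

```python
# Toy model of a Plan 9-style per-process namespace.
# A namespace maps mount points to services; mount attaches a new
# service, bind re-attaches an existing part of the namespace
# elsewhere, and unbind undoes either call.

class Namespace:
    def __init__(self):
        self.table = {}          # mount point -> service name

    def mount(self, service, point):
        # attach a new file tree (service) at a point in the namespace
        self.table[point] = service

    def bind(self, old_point, new_point):
        # re-attach an already-mounted tree at a second point
        self.table[new_point] = self.table[old_point]

    def unbind(self, point):
        # undo the effect of mount or bind at this point
        del self.table[point]

ns = Namespace()
ns.mount("remote-fileserver", "/n/fs")   # mount: new file system
ns.bind("/n/fs", "/bin")                 # bind: rearrange the namespace
print(ns.table["/bin"])                  # prints "remote-fileserver"
ns.unbind("/bin")
```

Each process group would hold its own Namespace instance, which is the sense in which Plan 9 namespaces are private.&lt;br /&gt;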
&lt;br /&gt;
Namespaces in Plan 9 are on a per-process basis. Every resource can be referenced by a unique name, and using mount and bind every process can build a custom namespace as it sees fit.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
&lt;br /&gt;
=== Parallel Programming ===&lt;br /&gt;
Parallel programming was supported in two ways:&lt;br /&gt;
* Kernel provides simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* Programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem which displays current process information in the form of files exists in modern Linux kernels.&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
GFS is a scalable, distributed file system for large, data-intensive applications, crafted for Google&#039;s unique needs as a search engine company.&lt;br /&gt;
&lt;br /&gt;
Unlike most filesystems, GFS is not part of the kernel: it must be linked into individual applications as a library. While this introduces some technical overhead, it gives the system more freedom to implement, or leave out, certain non-standard features.&lt;br /&gt;
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google File System consists of a single master, multiple chunk servers, and multiple clients. Chunk servers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64 KB blocks, each with its own checksum for data integrity checks, and chunks are replicated across servers (three replicas by default). The master maintains all the file system metadata, including the namespace and chunk locations.&lt;br /&gt;
&lt;br /&gt;
Each chunk is 64 MB large (contrast this with typical filesystem blocks of 512 or 4096 bytes), as the system is meant to hold enormous amounts of data, namely the internet. The large chunk size is also important for scalability: the larger the chunk, the less metadata the master has to store for a given amount of data. At this size, the master can keep the entirety of the metadata in memory, improving performance by a significant margin.&lt;br /&gt;
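&lt;br /&gt;
The scalability argument can be checked with back-of-the-envelope arithmetic. The figure of roughly 64 bytes of master metadata per chunk and the petabyte-scale workload below are illustrative assumptions, in the spirit of the GFS design rather than exact numbers:&lt;br /&gt;

```python
# Rough estimate: how much master metadata does 1 PB of file data
# need with 64 MB chunks, assuming on the order of 64 bytes of
# metadata per chunk (handle, version, replica locations)?

CHUNK_SIZE = 64 * 1024 * 1024        # 64 MB chunks
META_PER_CHUNK = 64                  # bytes per chunk, rough assumption
data = 1024 ** 5                     # 1 PB of file data

chunks = data // CHUNK_SIZE          # number of chunks
metadata = chunks * META_PER_CHUNK   # total master metadata in bytes

print(chunks)                        # 16777216 chunks
print(metadata // 1024 ** 3)         # 1 (GB) -- easily fits in RAM
```

With 4 KB chunks instead, the same data would need 16384 times more metadata, which is the core of the argument for large chunks.&lt;br /&gt;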
&lt;br /&gt;
== Operation ==&lt;br /&gt;
&lt;br /&gt;
Communication between the master and the chunk servers consists of&lt;br /&gt;
# checking whether any chunk server is down&lt;br /&gt;
# checking whether any file is corrupted&lt;br /&gt;
# deleting stale chunks&lt;br /&gt;
&lt;br /&gt;
When a client wants to perform operations on chunks:&lt;br /&gt;
# it first asks the master server for the list of servers that store the parts of the file it wants to access&lt;br /&gt;
# it receives a list of chunk servers, with multiple servers for each chunk&lt;br /&gt;
# it finally communicates with the chunk servers directly to perform the operation&lt;br /&gt;
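&lt;br /&gt;
The three client steps can be sketched as follows; the Master and ChunkServer classes and their method names are hypothetical stand-ins for the real RPC interfaces, and the 64-byte chunk size is only to keep the example small.&lt;br /&gt;

```python
# Toy GFS read path: the client asks the master for chunk locations,
# then fetches the data directly from a chunk server replica.

CHUNK_SIZE = 64  # bytes here; 64 MB in the real system

class ChunkServer:
    def __init__(self):
        self.chunks = {}             # chunk handle -> bytes

    def read(self, handle):
        return self.chunks[handle]

class Master:
    def __init__(self):
        self.locations = {}          # (file, chunk index) -> (handle, replicas)

    def lookup(self, filename, offset):
        index = offset // CHUNK_SIZE  # which chunk holds this offset
        return self.locations[(filename, index)]

# one chunk replicated on two servers
s1, s2 = ChunkServer(), ChunkServer()
s1.chunks["h42"] = b"hello gfs"
s2.chunks["h42"] = b"hello gfs"

m = Master()
m.locations[("/logs/web", 0)] = ("h42", [s1, s2])

# client: 1) ask the master, 2) get handle plus replica list,
#         3) read from any replica
handle, replicas = m.lookup("/logs/web", offset=0)
print(replicas[0].read(handle))      # b'hello gfs'
```

Because every chunk has several replicas, a client reading a large file can spread requests over many servers, which is where the linear throughput scaling mentioned below comes from.&lt;br /&gt;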
&lt;br /&gt;
The system is geared towards appends and sequential reads. This is why the master server responds with multiple server addresses for each chunk - the client can then request a small piece from each server, increasing the data throughput linearly with the number of servers. Writes, in general, are in the form of a special &#039;&#039;append&#039;&#039; system call. When appending, there is no chance that two clients will want to write to the same location at the same time. This helps avoid any potential synchronization issues. If there are multiple appends to the same file at the same time, the chunk servers are free to order them as they wish (chunks on each server are not guaranteed to be byte-for-byte identical). While a problem in the general sense, this is good enough for Google&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
GFS is built with failure in mind. The system expects that at any time, there is some server or disk that is malfunctioning. The system deals with the failures as follows.&lt;br /&gt;
&lt;br /&gt;
=== Chunk Servers ===&lt;br /&gt;
&lt;br /&gt;
By default, chunks are replicated to three servers; the exact number can depend on the application doing the write. When a chunk server finds that some of its data is corrupt, it fetches fresh copies from the other replicas to repair itself{{Citation needed}}.&lt;br /&gt;
&lt;br /&gt;
=== Master Server ===&lt;br /&gt;
&lt;br /&gt;
For efficiency, there is only a single live master server at a time. While this stops the system from being completely distributed, it avoids many synchronization problems and suits Google&#039;s needs. At any point in time, there are multiple read-only master servers that copy metadata from the currently live master. Should the live master go down, they will serve read operations from clients until one of the hot spares is promoted to being the new live master.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20142</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20142"/>
		<updated>2015-04-06T04:51:02Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Plan 9 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
Clouds is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into what appears to be a single machine.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
Clouds is structured around an object-thread model: a set of objects, each defined by its class. Objects respond to messages; sending a message to an object causes the object to execute the corresponding method and then reply.&lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects are objects that have one or more processes associated with them; furthermore, they can communicate with the external environment.&lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
Data in Clouds is long-lived: since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space, and several threads can enter an object simultaneously and execute concurrently. The nature of a Clouds object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural.&lt;br /&gt;
# Invocations work across machine boundaries.&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
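&lt;br /&gt;
A minimal Python sketch of the object-thread idea: control flow is a thread that enters objects by invoking operations on them, much like a procedure call. The class and method names here are illustrative, not the Clouds API.&lt;br /&gt;

```python
# Toy Clouds-style invocation: each object holds persistent state
# and exposes operations; a thread enters an object by invoking an
# operation, which in Clouds may even cross machine boundaries.

class CloudsObject:
    def __init__(self, name):
        self.name = name
        self.state = {}              # persistent per-object data

    def invoke(self, op, *args):
        # dispatch an operation on this object by name
        return getattr(self, op)(*args)

class Counter(CloudsObject):
    def increment(self):
        self.state["n"] = self.state.get("n", 0) + 1
        return self.state["n"]

c = Counter("counter-1")
c.invoke("increment")
print(c.invoke("increment"))         # 2
```

Because the state dictionary survives between invocations, this also hints at how a single-level store makes object data persistent across calls.&lt;br /&gt;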
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations, and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general-purpose, multi-user, mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs, the birthplace of Unix. The original Unix OS had no support for networking, and over the years there were many attempts by others to create distributed systems with Unix compatibility. Plan 9, however, is a distributed system done following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost effective using cheap, modern microcomputers. &lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This is made possible by two properties:&lt;br /&gt;
# A per process group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries, and system calls are similar to those of Unix, so a casual user cannot distinguish between the two. The problems in Unix were judged too deep to fix in place, but the good ideas were carried along: the areas Unix handled badly were redesigned, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Similarities with Unix ==&lt;br /&gt;
* The shell&lt;br /&gt;
* Various C compilers&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along the lines of service function.&lt;br /&gt;
* CPU servers and terminals run the same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P, is used to access these resources.&lt;br /&gt;
# The disjoint hierarchies provided by different services are joined into a single, private, hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls: mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; is used to attach a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; is used to attach a kernel-resident (existing, mounted) file system to the name space and also to rearrange pieces of the name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unbind&#039;&#039;&#039; which undoes the effects of the other two calls.&lt;br /&gt;
&lt;br /&gt;
Namespaces in Plan 9 are on a per-process basis. Every resource can be referenced by a unique name, and using mount and bind every process can build a custom namespace as it sees fit.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
&lt;br /&gt;
=== Parallel Programming ===&lt;br /&gt;
Parallel programming was supported in two ways:&lt;br /&gt;
* Kernel provides simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* Programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem which displays current process information in the form of files exists in modern Linux kernels.&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
GFS is a scalable, distributed file system for large, data-intensive applications, crafted for Google&#039;s unique needs as a search engine company.&lt;br /&gt;
&lt;br /&gt;
Unlike most filesystems, GFS is not part of the kernel: it must be linked into individual applications as a library. While this introduces some technical overhead, it gives the system more freedom to implement, or leave out, certain non-standard features.&lt;br /&gt;
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google File System consists of a single master, multiple chunk servers, and multiple clients. Chunk servers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64 KB blocks, each with its own checksum for data integrity checks, and chunks are replicated across servers (three replicas by default). The master maintains all the file system metadata, including the namespace and chunk locations.&lt;br /&gt;
&lt;br /&gt;
Each chunk is 64 MB large (contrast this with typical filesystem blocks of 512 or 4096 bytes), as the system is meant to hold enormous amounts of data, namely the internet. The large chunk size is also important for scalability: the larger the chunk, the less metadata the master has to store for a given amount of data. At this size, the master can keep the entirety of the metadata in memory, improving performance by a significant margin.&lt;br /&gt;
&lt;br /&gt;
== Operation ==&lt;br /&gt;
&lt;br /&gt;
Communication between the master and the chunk servers consists of&lt;br /&gt;
# checking whether any chunk server is down&lt;br /&gt;
# checking whether any file is corrupted&lt;br /&gt;
# deleting stale chunks&lt;br /&gt;
&lt;br /&gt;
When a client wants to perform operations on chunks:&lt;br /&gt;
# it first asks the master server for the list of servers that store the parts of the file it wants to access&lt;br /&gt;
# it receives a list of chunk servers, with multiple servers for each chunk&lt;br /&gt;
# it finally communicates with the chunk servers directly to perform the operation&lt;br /&gt;
&lt;br /&gt;
The system is geared towards appends and sequential reads. This is why the master server responds with multiple server addresses for each chunk - the client can then request a small piece from each server, increasing the data throughput linearly with the number of servers. Writes, in general, are in the form of a special &#039;&#039;append&#039;&#039; system call. When appending, there is no chance that two clients will want to write to the same location at the same time. This helps avoid any potential synchronization issues. If there are multiple appends to the same file at the same time, the chunk servers are free to order them as they wish (chunks on each server are not guaranteed to be byte-for-byte identical). While a problem in the general sense, this is good enough for Google&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
GFS is built with failure in mind. The system expects that at any time, there is some server or disk that is malfunctioning. The system deals with the failures as follows.&lt;br /&gt;
&lt;br /&gt;
=== Chunk Servers ===&lt;br /&gt;
&lt;br /&gt;
By default, chunks are replicated to three servers; the exact number can depend on the application doing the write. When a chunk server finds that some of its data is corrupt, it fetches fresh copies from the other replicas to repair itself{{Citation needed}}.&lt;br /&gt;
&lt;br /&gt;
=== Master Server ===&lt;br /&gt;
&lt;br /&gt;
For efficiency, there is only a single live master server at a time. While this stops the system from being completely distributed, it avoids many synchronization problems and suits Google&#039;s needs. At any point in time, there are multiple read-only master servers that copy metadata from the currently live master. Should the live master go down, they will serve read operations from clients until one of the hot spares is promoted to being the new live master.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=20141</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=20141"/>
		<updated>2015-04-06T04:45:54Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Andrew File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Andrew File System =&lt;br /&gt;
AFS (the Andrew File System) was created as a direct response to NFS. Universities ran into problems when they tried to scale NFS to share files among their staff effectively. AFS was more scalable than NFS because read-write operations happened locally before being committed to the server (the data store).&lt;br /&gt;
&lt;br /&gt;
Since AFS copies files locally when they are opened and only sends the data back when they are closed, all operations between opening and closing a file are very fast and do not need to touch the network. NFS, by contrast, works with files remotely, so there is no data to transfer when opening or closing a file, making those particular operations nearly instant.&lt;br /&gt;
&lt;br /&gt;
There are several problems with this design, however:&lt;br /&gt;
* The local system must have enough space to temporarily store the file.&lt;br /&gt;
* Opening and closing the files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).&lt;br /&gt;
* If the close operation fails, the system will not have the updated version of the file. Many programs are designed around local filesystems, and therefore don&#039;t even check the return value of the close operation (as this is unlikely to fail on a local FS), giving users the false impression that everything went well.&lt;br /&gt;
&lt;br /&gt;
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously annoying to set up as it is geared towards university-sized networks, further limiting its success.&lt;br /&gt;
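&lt;br /&gt;
The whole-file open/close semantics described above can be sketched in a few lines of Python; the Server and Client classes are illustrative stand-ins for the AFS cache manager, not its real interface.&lt;br /&gt;

```python
# Toy AFS-style whole-file caching: open copies the whole file
# locally, reads and writes hit the local copy, and close ships
# the file back to the server.

class Server:
    def __init__(self):
        self.files = {}                  # authoritative copies

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}                  # local copies of open files

    def open(self, path):
        # whole-file fetch: costly for large files, even to read 1 byte
        self.cache[path] = self.server.files[path]

    def write(self, path, data):
        self.cache[path] = data          # fast: purely local

    def close(self, path):
        # only now does the server see the update; if this step
        # fails, the new version is silently lost
        self.server.files[path] = self.cache.pop(path)

srv = Server()
srv.files["/u/alice/notes"] = "v1"
cl = Client(srv)
cl.open("/u/alice/notes")
cl.write("/u/alice/notes", "v2")
print(srv.files["/u/alice/notes"])       # still "v1" until close
cl.close("/u/alice/notes")
print(srv.files["/u/alice/notes"])       # now "v2"
```

This makes the failure mode above concrete: a program that ignores the return value of close never learns that the server copy was not updated.&lt;br /&gt;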
&lt;br /&gt;
= Amoeba Operating System =&lt;br /&gt;
&lt;br /&gt;
=== Capabilities ===&lt;br /&gt;
* A capability is a kind of ticket or key that allows its holder to perform some (not necessarily all) operations on an object.&lt;br /&gt;
* Each user process owns a collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each.&lt;br /&gt;
* Capabilities work across wide-area networks.&lt;br /&gt;
* A capability contains a server port and an object number, used by the server to identify which of its objects is being addressed and which operation is to be performed.&lt;br /&gt;
* A further field, the rights field, contains a bit map telling which operations the holder of the capability may perform.&lt;br /&gt;
* Capabilities are protected with a 48-bit random number.&lt;br /&gt;
* After the server has performed the operation, it sends back a reply message that unblocks the client.&lt;br /&gt;
* Sending a message, blocking, and accepting the reply form a remote procedure call, which can be encapsulated to make an entire remote operation look like a local procedure call.&lt;br /&gt;
* X11 window management is supported.&lt;br /&gt;
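&lt;br /&gt;
The capability idea can be illustrated with a small Python sketch. The field layout (port, object number, rights, 48-bit random check field) follows the description above, but the concrete values are illustrative, and for brevity rights are modelled as a set of operation names rather than the bit map the real system uses.&lt;br /&gt;

```python
# Toy Amoeba-style capability: a ticket naming an object plus a
# rights field saying which operations the holder may perform.
import secrets

def make_capability(port, obj_num, rights):
    return {
        "port": port,                    # identifies the server
        "object": obj_num,               # which of its objects is addressed
        "rights": frozenset(rights),     # operations the holder may perform
        "check": secrets.randbits(48),   # 48-bit random protection field
    }

def allows(cap, operation):
    # the holder may perform an operation only if it is in the
    # rights field (a bit map in the real system)
    return operation in cap["rights"]

cap = make_capability(port="file-server", obj_num=7,
                      rights=["read", "append"])
print(allows(cap, "read"))               # True
print(allows(cap, "delete"))             # False
```

The random check field is what makes capabilities hard to forge: a client that does not hold a valid capability cannot guess the 48-bit value.&lt;br /&gt;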
&lt;br /&gt;
&lt;br /&gt;
=== Thread Management ===&lt;br /&gt;
* A single process can have multiple threads; each thread has its own program counter and stack.&lt;br /&gt;
* Threads behave much like processes.&lt;br /&gt;
* Threads can synchronize using mutexes and semaphores.&lt;br /&gt;
* A thread blocks when it waits on a mutex held by another thread.&lt;br /&gt;
* Servers such as the file server use multiple threads.&lt;br /&gt;
* The careful reader may have noticed that a user process can pull 813 kbytes/sec.&lt;br /&gt;
&lt;br /&gt;
= Unique features =&lt;br /&gt;
&lt;br /&gt;
== Pool processors ==&lt;br /&gt;
Pool processors are a group of CPUs that are dynamically allocated according to user needs. When a program is executed, it runs on any of the available processors.&lt;br /&gt;
&lt;br /&gt;
== Supported architectures ==&lt;br /&gt;
Many different processor architectures are supported including:&lt;br /&gt;
* i80386 (and later x86 chips such as the Pentium)&lt;br /&gt;
* 68K&lt;br /&gt;
* SPARC&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* The first tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* The second: the protocols, not the software, define the system.&lt;br /&gt;
* The third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Design Decisions ===&lt;br /&gt;
* Designed for a cluster of workstations with high-speed network access (it only really supports a LAN).&lt;br /&gt;
* Abstract the physical architecture of the participating workstations, by defining common protocols providing well-defined interfaces.&lt;br /&gt;
&lt;br /&gt;
V ran on a LAN, and its developers built a very fast IPC protocol, which made it one of the fastest distributed operating systems within a small geographic area. On top of the IPC protocols, V also implemented RPC calls in the background.&lt;br /&gt;
&lt;br /&gt;
V uses the strong consistency model. This model can cause issues with concurrency because in V files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This could cause problems unless there is an effective implementation to deal with multiple versions, etc.&lt;br /&gt;
&lt;br /&gt;
The VMTP protocol was used for communication. It supports request-response behaviour and also provides transparency, a group communication facility, and flow control. It is pretty much like TCP.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20140</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20140"/>
		<updated>2015-04-06T04:42:05Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Unix */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
Multics came into being in the 1960s and completely vanished in the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but rather a centralized system, written largely in PL/I for specific hardware.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into random-access memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception is a small, minimal API system designed by two guys from Bell Labs. It was essentially an OS that was optimized for the needs of programmers but not much beyond that. The UNIX OS ran on one computer, and terminals ran from that one computer. Thus it is not a distributed operating system as it is centralized and implements time sharing. In fact, it didn&#039;t even have support for networking in the first version.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure: Sun wanted to encrypt the RPC connections, but encryption would have triggered government regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow, they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency  to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today&#039;s systems. It uses replication and partitioning, which are employed in cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine, and if the machine was no longer idle (because another user was using it), the system would take care of migrating the process to yet another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; By using a version number for each file&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; Disable cache to clients, enable write-blocking and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM in computers which can be used for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log structured file-systems are optimized for writes, as changes to previous data are appended at the current position.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSDs (solid-state drives)&lt;br /&gt;
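&lt;br /&gt;
The append-only idea behind a log-structured file system can be sketched in a few lines of Python; the Log class illustrates the principle only and is not Sprite&#039;s actual LFS implementation.&lt;br /&gt;

```python
# Toy log-structured store: every update, including changes to
# existing data, is appended at the current end of the log, so all
# writes are sequential; reads consult an index of latest versions.

class Log:
    def __init__(self):
        self.entries = []                # append-only on-disk log
        self.index = {}                  # key -> position of newest copy

    def write(self, key, value):
        self.entries.append((key, value))           # sequential append
        self.index[key] = len(self.entries) - 1     # point at newest copy

    def read(self, key):
        return self.entries[self.index[key]][1]

log = Log()
log.write("inode-5", "v1")
log.write("inode-5", "v2")               # an overwrite is just another append
print(log.read("inode-5"))               # "v2"
print(len(log.entries))                  # 2 -- the stale copy awaits cleaning
```

The stale entries left behind are why real log-structured file systems need a segment cleaner (garbage collector) running alongside normal operation.&lt;br /&gt;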
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and used caching to increase performance.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20139</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20139"/>
		<updated>2015-04-06T04:40:44Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Sun NFS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
Multics came into being in the 1960s and completely vanished in the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but rather a centralized system, written largely in PL/I for specific hardware.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into random-access memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception is a small, minimal-API system designed by two guys from Bell Labs. It was essentially an OS that would be easy to grasp for a programmer, but not much beyond that. The Unix OS ran on one computer, and terminals ran from that one computer; thus it is not a distributed operating system, as it is centralized and implements time sharing. In fact, it didn&#039;t even have support for networking in the first version.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure: Sun wanted to encrypt the RPC connections, but encryption would have triggered government regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow, they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today&#039;s systems. It uses replication and partitioning, techniques that are employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* Users could initiate a process migration to an idle machine; if that machine was no longer idle because another user had started using it, the system would take care of migrating the process to yet another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing: handled by keeping a version number for each file&lt;br /&gt;
#* Concurrent write sharing: handled by disabling client caching, enabling write-blocking, and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM available for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log-structured file systems are optimized for writes: changes to existing data are appended at the current end of the log.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSDs (solid-state drives)&lt;br /&gt;
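The append-only idea behind log-structured file systems can be sketched with a toy in-memory model (an illustration of the technique only, not Sprite&#039;s actual implementation):

```javascript
// Toy model of a log-structured store: every write is appended to the
// end of a single log, and an index maps each file id to the offset of
// its latest version. Old versions become garbage to be cleaned later.
class LogFS {
  constructor() {
    this.log = [];          // the append-only log (sequential writes)
    this.index = new Map(); // file id -> offset of newest record
  }
  write(fileId, data) {
    const offset = this.log.length;
    this.log.push({ fileId, data }); // always appended, never in-place
    this.index.set(fileId, offset);  // index now points at new version
  }
  read(fileId) {
    const offset = this.index.get(fileId);
    return offset === undefined ? null : this.log[offset].data;
  }
}

const fs = new LogFS();
fs.write("a", "v1");
fs.write("a", "v2"); // an update appends; offset 0 is not overwritten
console.log(fs.read("a")); // prints "v2"
```

Because updates never seek back to overwrite old blocks, all disk writes land sequentially at the log tail, which is why this layout favors write-heavy workloads.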
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and used caching to increase performance.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20138</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20138"/>
		<updated>2015-04-06T04:37:44Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Multics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
Multics came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written in machine-specific assembly.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into random access memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception was a small system with a minimal API, designed by Ken Thompson and Dennis Ritchie at Bell Labs. It was essentially an OS that would be easy for a programmer to grasp, but not much beyond that. The UNIX OS ran on one computer, and terminals connected to that one computer. Thus it is not a distributed operating system: it is centralized and implements time sharing. In fact, the first version did not even have support for networking.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as its creators wanted a machine-agnostic language for writing the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present-day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in essentially its original form.&lt;br /&gt;
&lt;br /&gt;
= Sun NFS =&lt;br /&gt;
&lt;br /&gt;
Sun NFS implemented networking using RPC connections. These connections are not secure: Sun wanted to encrypt the RPC traffic, but encryption would have subjected NFS to government export regulations that Sun needed to avoid in order to sell it overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow, they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today&#039;s systems. It uses replication and partitioning, techniques that are employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* Users could initiate a process migration to an idle machine; if that machine was no longer idle because another user had started using it, the system would take care of migrating the process to yet another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing: handled by keeping a version number for each file&lt;br /&gt;
#* Concurrent write sharing: handled by disabling client caching, enabling write-blocking, and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM available for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log-structured file systems are optimized for writes: changes to existing data are appended at the current end of the log.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSDs (solid-state drives)&lt;br /&gt;
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and used caching to increase performance.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=20137</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=20137"/>
		<updated>2015-04-06T04:36:36Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Discussion: Easy on one computer, hard on multiple computers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Jason, Kirill, Sravan, Agustin, Hassan, Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, led by Doug Engelbart&lt;br /&gt;
* Initial public display of many modern technologies&lt;br /&gt;
* One computer with multiple remote terminals&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse (and coined the term)&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking (hypertext)&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control in that user can provide read-only access, read-write privilege to file&lt;br /&gt;
* Chorded keyboard&lt;br /&gt;
** A macro keyboard that allowed sending messages quickly while using the mouse at the same time&lt;br /&gt;
* Not really a distributed operating system, but great start because multiple users at different terminals could share same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Protocol similar to a cross between UDP and TCP, before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use processing power of others&lt;br /&gt;
* Much of the time it would be idling, which is amazing at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor lead discussion 1 = &lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it&#039;s more of a dream. &lt;br /&gt;
&lt;br /&gt;
Throughout the history of trying to achieve this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would come up with a solution to that issue, only to find that some other technical issue cropped up. For example, &#039;The Mother of All Demos&#039; tried to build a distributed operating system, but its creators had no networking; they had to point a television camera at the computer monitor to be able to demo their concept. The early Internet came along and dealt with the lack of networking, but the early Internet itself had other technical issues.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;time sharing&#039;&#039;&#039;, which is an old term for multiple processes running simultaneously. However, at the time it referred to multiple users sharing the CPU cycles of a single computer. Today, a single user&#039;s many processes sharing a single CPU is much more common.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
| Can&#039;t get out of sync&lt;br /&gt;
| Synchronization issues&lt;br /&gt;
* Client-server model&lt;br /&gt;
** If two or more servers receive requests at the same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
| Only one data store/source&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
|&lt;br /&gt;
* Full control of all software&lt;br /&gt;
* All possible users are physically co-located&lt;br /&gt;
* Latency minimized&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|&lt;br /&gt;
* Games are fundamentally low-latency. Networking is fundamentally high-latency.&lt;br /&gt;
* Input prediction doesn&#039;t work well on twitchy games.&lt;br /&gt;
* Have to handle lying or faulty clients.&lt;br /&gt;
* Have to handle users finding each other.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** A classic example is wireless ad hoc networking&lt;br /&gt;
&lt;br /&gt;
The issues above are not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may result.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=20136</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=20136"/>
		<updated>2015-04-06T04:36:00Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Discussion: Easy on one computer, hard on multiple computers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Jason, Kirill, Sravan, Agustin, Hassan, Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, led by Doug Engelbart&lt;br /&gt;
* Initial public display of many modern technologies&lt;br /&gt;
* One computer with multiple remote terminals&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse (and coined the term)&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking (hypertext)&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control in that user can provide read-only access, read-write privilege to file&lt;br /&gt;
* Chorded keyboard&lt;br /&gt;
** A macro keyboard that allowed sending messages quickly while using the mouse at the same time&lt;br /&gt;
* Not really a distributed operating system, but great start because multiple users at different terminals could share same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Protocol similar to a cross between UDP and TCP, before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use processing power of others&lt;br /&gt;
* Much of the time it would be idling, which is amazing at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor lead discussion 1 = &lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it&#039;s more of a dream. &lt;br /&gt;
&lt;br /&gt;
Throughout the history of trying to achieve this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would come up with a solution to that issue, only to find that some other technical issue cropped up. For example, &#039;The Mother of All Demos&#039; tried to build a distributed operating system, but its creators had no networking; they had to point a television camera at the computer monitor to be able to demo their concept. The early Internet came along and dealt with the lack of networking, but the early Internet itself had other technical issues.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;time sharing&#039;&#039;&#039;, which is an old term for multiple processes running simultaneously. However, at the time it referred to multiple users sharing the CPU cycles of a single computer. Today, a single user&#039;s many processes sharing a single CPU is much more common.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
| Can&#039;t get out of sync&lt;br /&gt;
| Synchronization issues&lt;br /&gt;
* Client-server model&lt;br /&gt;
** If two or more servers receive requests at the same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
| Only one data store/source&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
* Full control of all software&lt;br /&gt;
* All possible users are physically co-located&lt;br /&gt;
* Latency minimized&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
* Games are fundamentally low-latency. Networking is fundamentally high-latency.&lt;br /&gt;
* Input prediction doesn&#039;t work well on twitchy games.&lt;br /&gt;
* Have to handle lying or faulty clients.&lt;br /&gt;
* Have to handle users finding each other.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** A classic example is wireless ad hoc networking&lt;br /&gt;
&lt;br /&gt;
The issues above are not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may result.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20135</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20135"/>
		<updated>2015-04-06T04:30:42Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* F4 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Haystack=&lt;br /&gt;
* Facebook&#039;s Photo Application Storage System. &lt;br /&gt;
* Facebook&#039;s previous photo storage was based on an NFS design. NFS did not work well because it took 3 file-system accesses per logical photo read; Haystack needs only one access.&lt;br /&gt;
*Main goals of Haystack:&lt;br /&gt;
** High throughput with low latency, achieved using one disk operation per read&lt;br /&gt;
**Fault tolerance&lt;br /&gt;
**Cost effective&lt;br /&gt;
**Simple&lt;br /&gt;
*Facebook stores all images in Haystack, with a CDN in front to cache hot data. Haystack still needs to be fast, since accessing non-cached data is common.&lt;br /&gt;
*Haystack reduces the memory used for &#039;&#039;filesystem metadata&#039;&#039; &lt;br /&gt;
*It has 2 types of metadata:&lt;br /&gt;
**&#039;&#039;Application metadata&#039;&#039;&lt;br /&gt;
**&#039;&#039;File System metadata&#039;&#039;&lt;br /&gt;
* The architecture consists of 3 components:&lt;br /&gt;
**Haystack Store&lt;br /&gt;
**Haystack Directory&lt;br /&gt;
**Haystack Cache&lt;br /&gt;
*Pitchfork and bulk sync were used to tolerate faults; this fault tolerance is essential to making Haystack feasible and reliable&lt;br /&gt;
&lt;br /&gt;
=Comet=&lt;br /&gt;
*Introduced the concept of distributed shared memory (DSM). In a DSM, RAMs from multiple servers would appear as if they are all belonging to one server, allowing better scalability for caching.&lt;br /&gt;
*The client and server maintain consistency using the DSM&lt;br /&gt;
*The Comet model works by offloading a computation-intensive process from the mobile device to a single server.&lt;br /&gt;
*The offloading works by passing the computation-intensive process to the server while holding it on the mobile device. Once the process completes on the server, the results and the handle are returned to the mobile device. In other words, the process is not physically moved to the server; instead, it runs on the server while remaining stopped on the mobile device.&lt;br /&gt;
&lt;br /&gt;
=F4=&lt;br /&gt;
* Warm Blob Storage System.&lt;br /&gt;
** A warm BLOB store holds large quantities of immutable data that isn&#039;t frequently accessed but must still be available.&lt;br /&gt;
** Built to reduce the overhead of Haystack for old data that doesn&#039;t need to be quite as available. Generally, data that is a few months old is moved from Haystack to the warm store.&lt;br /&gt;
** F4 reduces the effective replication factor of Haystack from 3.6 to 2.8 or 2.1, using Reed-Solomon coding and XOR coding respectively, while still providing consistency.&lt;br /&gt;
** Less robust to data center failures as a result.&lt;br /&gt;
*Reed-Solomon coding uses a (10,4) scheme, meaning 10 data blocks and 4 parity blocks in a stripe; it can thus tolerate losing up to 4 blocks (i.e., 4 rack failures) with a 1.4 expansion factor. Two copies of this give a 2 * 1.4 = 2.8 effective replication factor.&lt;br /&gt;
*XOR coding uses (2,1) across three data centers with a 1.5 expansion factor, which gives a 1.5 * 1.4 = 2.1 effective replication factor.&lt;br /&gt;
*The caching mechanism reduces the load on the storage system and makes BLOB storage scalable.&lt;br /&gt;
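The effective replication factors quoted above follow from simple arithmetic; a sketch, with the (n, k) parameters taken from the notes above:

```javascript
// Expansion factor of an erasure code with n data blocks and k parity
// blocks in a stripe: (n + k) / n bytes stored per byte of data.
function expansionFactor(n, k) {
  return (n + k) / n;
}

// Reed-Solomon (10, 4) gives a 1.4 expansion factor; keeping two
// geographically separate copies doubles it: 2 * 1.4 = 2.8.
const rs = 2 * expansionFactor(10, 4);
console.log(rs.toFixed(1)); // prints 2.8

// XOR (2, 1) replaces the second full copy with a parity volume across
// data centers, multiplying a further 1.5 onto the 1.4 RS expansion:
// 1.5 * 1.4 = 2.1.
const xor = expansionFactor(2, 1) * expansionFactor(10, 4);
console.log(xor.toFixed(1)); // prints 2.1
```

The saving over plain Haystack (factor 3.6) comes entirely from replacing whole-copy replication with parity blocks while keeping enough redundancy to survive rack and data-center failures.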
&lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block toward global distributed systems. The main critique is that the authors did not present a specific use case upon which their design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. There is no distributed system model that can be “one size fits all”; most probably it will break in some large-scale distributed application.&lt;br /&gt;
*Reaching a global distributed system that addresses all the distributed OS use cases will be the cumulative work of many big bodies, building it block by block; the system will then evolve by putting all these different building blocks together. In other words, a global distributed system will come from a “bottom up, not top down approach” [Somayaji, 2015].&lt;br /&gt;
*The concept of separating application logic from deployment logic helps programmers build a flexible system. The other important part that makes it scalable is that it is object-based and can be integrated with any object-oriented language.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20134</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20134"/>
		<updated>2015-04-06T04:24:35Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Haystack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Haystack=&lt;br /&gt;
* Facebook&#039;s Photo Application Storage System. &lt;br /&gt;
* Facebook&#039;s previous photo storage was based on an NFS design. NFS did not work well because it took 3 file-system accesses per logical photo read; Haystack needs only one access.&lt;br /&gt;
*Main goals of Haystack:&lt;br /&gt;
** High throughput with low latency, achieved using one disk operation per read&lt;br /&gt;
**Fault tolerance&lt;br /&gt;
**Cost effective&lt;br /&gt;
**Simple&lt;br /&gt;
*Facebook stores all images in Haystack, with a CDN in front to cache hot data. Haystack still needs to be fast, since accessing non-cached data is common.&lt;br /&gt;
*Haystack reduces the memory used for &#039;&#039;filesystem metadata&#039;&#039; &lt;br /&gt;
*It has 2 types of metadata:&lt;br /&gt;
**&#039;&#039;Application metadata&#039;&#039;&lt;br /&gt;
**&#039;&#039;File System metadata&#039;&#039;&lt;br /&gt;
* The architecture consists of 3 components:&lt;br /&gt;
**Haystack Store&lt;br /&gt;
**Haystack Directory&lt;br /&gt;
**Haystack Cache&lt;br /&gt;
*Pitchfork and bulk sync were used to tolerate faults; this fault tolerance is essential to making Haystack feasible and reliable&lt;br /&gt;
&lt;br /&gt;
=Comet=&lt;br /&gt;
*Introduced the concept of distributed shared memory (DSM). In a DSM, RAMs from multiple servers would appear as if they are all belonging to one server, allowing better scalability for caching.&lt;br /&gt;
*The client and server maintain consistency using the DSM&lt;br /&gt;
*The Comet model works by offloading a computation-intensive process from the mobile device to a single server.&lt;br /&gt;
*The offloading works by passing the computation-intensive process to the server while holding it on the mobile device. Once the process completes on the server, the results and the handle are returned to the mobile device. In other words, the process is not physically moved to the server; instead, it runs on the server while remaining stopped on the mobile device.&lt;br /&gt;
&lt;br /&gt;
=F4=&lt;br /&gt;
* Warm Blob Storage System.&lt;br /&gt;
** Warm BLOBs are immutable data whose access rate cools very rapidly.&lt;br /&gt;
** F4 reduces the effective replication factor of Haystack from 3.6 to 2.8 or 2.1, using Reed-Solomon coding and XOR coding respectively, while still providing consistency.&lt;br /&gt;
*Reed-Solomon coding uses a (10,4) scheme, meaning 10 data blocks and 4 parity blocks in a stripe; it can thus tolerate losing up to 4 blocks (i.e., 4 rack failures) with a 1.4 expansion factor. Two copies of this give a 2 * 1.4 = 2.8 effective replication factor.&lt;br /&gt;
*XOR coding uses (2,1) across three data centers with a 1.5 expansion factor, which gives a 1.5 * 1.4 = 2.1 effective replication factor&lt;br /&gt;
*The caching mechanism reduces the load on the storage system and makes BLOB storage scalable&lt;br /&gt;
*The concept of hot and warm storage is used to keep the system simple and modular&lt;br /&gt;
&lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block toward global distributed systems. The main critique is that the authors did not present a specific use case upon which their design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. There is no distributed system model that can be “one size fits all”; most probably it will break in some large-scale distributed application.&lt;br /&gt;
*Reaching a global distributed system that addresses all the distributed OS use cases will be the cumulative work of many big bodies, building it block by block; the system will then evolve by putting all these different building blocks together. In other words, a global distributed system will come from a “bottom up, not top down approach” [Somayaji, 2015].&lt;br /&gt;
*The concept of separating application logic from deployment logic helps programmers build a flexible system. The other important part that makes it scalable is that it is object-based and can be integrated with any object-oriented language.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_Tasks_1&amp;diff=17849</id>
		<title>WebFund 2013W: Tasks 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_Tasks_1&amp;diff=17849"/>
		<updated>2013-03-12T21:47:08Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Your first set of tasks is the following:&lt;br /&gt;
* Set up an account on [https://github.com/ github].&lt;br /&gt;
* Validate your student email with github to get a free student account: https://github.com/edu&lt;br /&gt;
* Set up an account on the class wiki (optional).&lt;br /&gt;
* Tell the TA your current ideas on your term project and who you would like to work with.&lt;br /&gt;
* Download and run/install a copy of node.js.  A local copy of the standalone 32-bit Windows executable is [http://homeostasis.scs.carleton.ca/~soma/webfund-2013w/node.exe here].  Other versions are [http://nodejs.org/download/ here].&lt;br /&gt;
* Run the node.js hello world program as shown in the &amp;quot;Basic HTTP server&amp;quot; section of the [http://www.nodebeginner.org/#a-basic-http-server Node Beginner Book].&lt;br /&gt;
&lt;br /&gt;
You should get checked off on these tasks by a TA in tutorial.&lt;br /&gt;
&lt;br /&gt;
==Hints==&lt;br /&gt;
&lt;br /&gt;
* If you just run the node.exe executable by double-clicking on it, you&#039;ll get a &amp;quot;&amp;gt;&amp;quot; prompt.  This is the node read/eval/print loop (REPL), meaning you type JavaScript into it.&lt;br /&gt;
* To run a JavaScript file in node, start up a command prompt, cd to the directory containing node.exe and your file foo.js, and then run &amp;quot;node foo.js&amp;quot;.&lt;br /&gt;
* In the lab or elsewhere, if you get a firewall prompt, that is normal - node is trying to listen on a port.  That&#039;s exactly what firewalls are supposed to block.  Allow access.  Note that if you check any boxes you&#039;ll need to enter an admin password.  If you just allow access it should work.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_Tasks_1&amp;diff=17848</id>
		<title>WebFund 2013W: Tasks 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_Tasks_1&amp;diff=17848"/>
		<updated>2013-03-12T21:46:35Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Your first set of tasks are the following:&lt;br /&gt;
* Setup an account on [https://github.com/ github].&lt;br /&gt;
* Validate your student email with github to get a free student account: https://github.com/edu&lt;br /&gt;
* Setup an account on the class wiki (optional).&lt;br /&gt;
* Tell the TA your current ideas on your term project and who you would like to work with.&lt;br /&gt;
* If you do not have a partner, post a blurb on Piazza to help find one, or reply to an existing solicitation.&lt;br /&gt;
* Download and run/install a copy of node.js.  A local copy of the standalone 32-bit Windows executable is available [http://homeostasis.scs.carleton.ca/~soma/webfund-2013w/node.exe here].  Other versions are [http://nodejs.org/download/ here].&lt;br /&gt;
* Run the node.js hello world program as shown in the &amp;quot;Basic HTTP server&amp;quot; section of the [http://www.nodebeginner.org/#a-basic-http-server Node Beginner Book].&lt;br /&gt;
&lt;br /&gt;
You should get checked off on these tasks by a TA in tutorial.&lt;br /&gt;
&lt;br /&gt;
==Hints==&lt;br /&gt;
&lt;br /&gt;
* If you just run the node.exe executable by double-clicking on it, you&#039;ll get a &amp;quot;&amp;gt;&amp;quot; prompt.  This is the node read/eval/print loop (REPL): you type JavaScript into it and it evaluates each line.&lt;br /&gt;
* If you want node to run a JavaScript file, start up a command prompt, cd to the directory containing node.exe and your file foo.js, and then run &amp;quot;node foo.js&amp;quot;.&lt;br /&gt;
* In the lab or elsewhere, if you get a firewall prompt, that is normal - node is trying to listen on a port.  That&#039;s exactly what firewalls are supposed to block.  Allow access.  Note that if you check any boxes you&#039;ll need to enter an admin password.  If you just allow access it should work.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_jQuery_UI&amp;diff=17787</id>
		<title>WebFund 2013W: jQuery UI</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_jQuery_UI&amp;diff=17787"/>
		<updated>2013-02-01T20:24:43Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this tutorial you will examine and modify a simple example page using [http://jqueryui.com/ jQuery UI].&lt;br /&gt;
&lt;br /&gt;
This application uses:&lt;br /&gt;
* jQuery&lt;br /&gt;
* jQuery-UI (jQuery plugin)&lt;br /&gt;
* jQuery-validate (jQuery plugin)&lt;br /&gt;
* Bootstrap &lt;br /&gt;
&lt;br /&gt;
You will need to look up the documentation on these to understand how the application works.&lt;br /&gt;
&lt;br /&gt;
The file [http://homeostasis.scs.carleton.ca/~soma/webfund-2013w/demo-jquery-ui.zip demo-jquery-ui.zip] contains the example index.html file and associated libraries.  Please download this file locally, unzip it, and navigate to the local file &amp;lt;tt&amp;gt;demo-jquery-ui/index.html&amp;lt;/tt&amp;gt; in your browser.  (Note we will not be using node.js in this tutorial).  You should see something like this:&lt;br /&gt;
&lt;br /&gt;
[[File:Demo-jquery-ui-screenshot.png]]&lt;br /&gt;
&lt;br /&gt;
Note that:&lt;br /&gt;
* The country field auto-completes with a few countries.&lt;br /&gt;
* The Birthday field brings up a calendar.&lt;br /&gt;
* If you click the email me button, you get another field for entering your email address.&lt;br /&gt;
* If you hit submit when all the fields are &amp;quot;valid&amp;quot;, they all go blank (and the URL contains the values you entered).  If you try submitting an invalid form, the invalid entries are highlighted instead.&lt;br /&gt;
&lt;br /&gt;
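One of the modifications below asks you to make the email address validate only if it contains an &amp;quot;@&amp;quot;.  A sketch of the core check (the helper name looksLikeEmail and the rule name hasAt are our own inventions; the real jQuery-validate extension point is $.validator.addMethod):&lt;br /&gt;

```javascript
// Returns true only if the value contains an "@" character.
// This deliberately matches the (loose) requirement in the task,
// not any full email grammar.
function looksLikeEmail(value) {
  return typeof value === "string" && value.indexOf("@") !== -1;
}

// In the browser, this predicate could back a custom jQuery-validate
// rule, wired up roughly like this (names are illustrative):
//
//   $.validator.addMethod("hasAt", function (value, element) {
//     return this.optional(element) || looksLikeEmail(value);
//   }, "Please include an @ in the address.");
//
// The email field would then opt in via class="hasAt" or a rules entry.
```
&lt;br /&gt;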
Modify this page as follows:&lt;br /&gt;
# Remove the loading of [http://twitter.github.com/bootstrap/ bootstrap] (JS and style sheet at top of index.html) - what changed?&lt;br /&gt;
# Make an alert dialog pop up when you hit submit on a valid form that says &amp;quot;Form validated.&amp;quot;&lt;br /&gt;
# Make the email address validate only if it has a &amp;quot;@&amp;quot;.&lt;br /&gt;
# Print a message below the country field that says &amp;quot;invalid Country&amp;quot; if you manually type in a new country that isn&#039;t in the list.&lt;br /&gt;
# Add some of the other widgets that are part of [http://jqueryui.com/ jQuery UI].&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_jQuery_UI&amp;diff=17786</id>
		<title>WebFund 2013W: jQuery UI</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_jQuery_UI&amp;diff=17786"/>
		<updated>2013-02-01T20:24:20Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this tutorial you will examine and modify a simple example page using [http://jqueryui.com/ jQuery UI].&lt;br /&gt;
&lt;br /&gt;
This application uses:&lt;br /&gt;
jQuery&lt;br /&gt;
jQuery-UI (jQuery plugin)&lt;br /&gt;
jQuery-validate (jQuery plugin)&lt;br /&gt;
Bootstrap &lt;br /&gt;
&lt;br /&gt;
You will need to look up the documentation on these to understand how the application works.&lt;br /&gt;
&lt;br /&gt;
The file [http://homeostasis.scs.carleton.ca/~soma/webfund-2013w/demo-jquery-ui.zip demo-jquery-ui.zip] contains the example index.html file and associated libraries.  Please download this file locally, unzip it, and navigate to the local file &amp;lt;tt&amp;gt;demo-jquery-ui/index.html&amp;lt;/tt&amp;gt; in your browser.  (Note we will not be using node.js in this tutorial).  You should see something like this:&lt;br /&gt;
&lt;br /&gt;
[[File:Demo-jquery-ui-screenshot.png]]&lt;br /&gt;
&lt;br /&gt;
Note that:&lt;br /&gt;
* The country field auto-completes with a few countries.&lt;br /&gt;
* The Birthday field brings up a calendar.&lt;br /&gt;
* If you click the email me button, you get another field for entering your email address.&lt;br /&gt;
* If you hit submit when all the fields are &amp;quot;valid&amp;quot;, they all go blank (and the URL contains the values you entered).  If you try submitting an invalid form, the invalid entries are highlighted instead.&lt;br /&gt;
&lt;br /&gt;
Modify this page as follows:&lt;br /&gt;
# Remove the loading of [http://twitter.github.com/bootstrap/ bootstrap] (JS and style sheet at top of index.html) - what changed?&lt;br /&gt;
# Make an alert dialog pop up when you hit submit on a valid form that says &amp;quot;Form validated.&amp;quot;&lt;br /&gt;
# Make the email address validate only if it has a &amp;quot;@&amp;quot;.&lt;br /&gt;
# Print a message below the country field that says &amp;quot;invalid Country&amp;quot; if you manually type in a new country that isn&#039;t in the list.&lt;br /&gt;
# Add some of the other widgets that are part of [http://jqueryui.com/ jQuery UI].&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17774</id>
		<title>Github</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17774"/>
		<updated>2013-01-25T18:20:48Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using github for a school project, it is recommended that you do not fully trust each other to do the right thing. For this reason we recommend the Review To Commit (RTC) pattern. Note RTC is a bit of a misnomer for a git-oriented workflow-- it could more accurately be called Review to Merge.&lt;br /&gt;
&lt;br /&gt;
Here is a diagram of the idealized untrusted development workflow:&lt;br /&gt;
&lt;br /&gt;
[[File:Diagram-1.gif|700px]]&lt;br /&gt;
&lt;br /&gt;
Here is the script we went through (or at least, intended to) in class:&lt;br /&gt;
&lt;br /&gt;
*Change to CarletonU-COMP2406-W2013 account perspective (top-left)&lt;br /&gt;
*Show all repos (right), news feed (center)&lt;br /&gt;
*Choose Demo-Repo (repo hereafter &amp;quot;mothership&amp;quot;)&lt;br /&gt;
*fork to private github account (top right) (repo hereafter &amp;quot;fork&amp;quot;)&lt;br /&gt;
*grab SSH URL&lt;br /&gt;
*git clone [fork&#039;s SSH URL] &lt;br /&gt;
*show contents of directory (note .git, .gitignore, README.md)&lt;br /&gt;
*create issue in mothership &amp;quot;need index.html&amp;quot;&lt;br /&gt;
*assign issue to self&lt;br /&gt;
*git checkout -b new-index&lt;br /&gt;
*git branch (show new branch)&lt;br /&gt;
*create index.html&lt;br /&gt;
*git status (show index.html needs to be added)&lt;br /&gt;
*git add .&lt;br /&gt;
*git status (show index.html staged for commit)&lt;br /&gt;
*git commit -am &amp;quot;adding index.html, fixes #1&lt;br /&gt;
&lt;br /&gt;
added basic head&lt;br /&gt;
added empty body&amp;quot;&lt;br /&gt;
*git status (nothing to commit)&lt;br /&gt;
*git push origin new-index&lt;br /&gt;
*go to fork&#039;s github page&lt;br /&gt;
*create pull request from fork new-index to mothership master&lt;br /&gt;
*look at pull request in mothership&lt;br /&gt;
*comment on incorrect tag in github diff (pretending you&#039;re a team mate)&lt;br /&gt;
*fix broken tag&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;fixing broken tag&amp;quot;&lt;br /&gt;
*git push&lt;br /&gt;
*Note the pull request has updated itself with the new commit&lt;br /&gt;
*accept pull request (pretending you&#039;re a team mate)&lt;br /&gt;
*Note #1 has closed itself because it was mentioned in commit with &amp;quot;fixes&amp;quot;&lt;br /&gt;
*git remote -v&lt;br /&gt;
*note &amp;quot;origin&amp;quot; is assigned to fork&lt;br /&gt;
*copy mothership&#039;s SSH URL&lt;br /&gt;
*git help remote (show how to get help on a command)&lt;br /&gt;
*git remote add mothership [mothership&#039;s URL]&lt;br /&gt;
*git checkout master&lt;br /&gt;
*git pull mothership (gets your changes from new-index)&lt;br /&gt;
*git branch -d new-index (deletes new-index)&lt;br /&gt;
*add &amp;quot;foo&amp;quot; to body of index.html in github (pretending you&#039;re a team mate)&lt;br /&gt;
*add &amp;quot;bar&amp;quot; to body of index.html locally&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;adding bar&amp;quot;&lt;br /&gt;
*git pull mothership (merge conflict!)&lt;br /&gt;
*resolve merge conflict&lt;br /&gt;
*git commit -am &amp;quot;resolving merge conflict&amp;quot;&lt;br /&gt;
*add &amp;quot;baz&amp;quot; to body of index.html&lt;br /&gt;
*git status (index.html is modified but not yet committed)&lt;br /&gt;
*git stash (working tree is back to no local changes)&lt;br /&gt;
*git stash pop (baz is back)&lt;br /&gt;
*git commit -am &amp;quot;adding baz&amp;quot; (oh no, I don&#039;t actually like this change)&lt;br /&gt;
*git reset HEAD^ &lt;br /&gt;
*git status (baz is still there, but not committed)&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17773</id>
		<title>Github</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17773"/>
		<updated>2013-01-25T18:20:28Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using github for a school project, it is recommended that you do not fully trust each other to do the right thing. For this reason we recommend the Review To Commit (RTC) pattern. Note RTC is a bit of a misnomer for a git-oriented workflow-- it could more accurately be called Review to Merge.&lt;br /&gt;
&lt;br /&gt;
Here is a diagram of the idealized untrusted development workflow:&lt;br /&gt;
[[File:Diagram-1.gif|800px]]&lt;br /&gt;
&lt;br /&gt;
Here is the script we went through (or at least, intended to) in class:&lt;br /&gt;
&lt;br /&gt;
*Change to CarletonU-COMP2406-W2013 account perspective (top-left)&lt;br /&gt;
*Show all repos (right), news feed (center)&lt;br /&gt;
*Choose Demo-Repo (repo hereafter &amp;quot;mothership&amp;quot;)&lt;br /&gt;
*fork to private github account (top right) (repo hereafter &amp;quot;fork&amp;quot;)&lt;br /&gt;
*grab SSH URL&lt;br /&gt;
*git clone [fork&#039;s SSH URL] &lt;br /&gt;
*show contents of directory (note .git, .gitignore, README.md)&lt;br /&gt;
*create issue in mothership &amp;quot;need index.html&amp;quot;&lt;br /&gt;
*assign issue to self&lt;br /&gt;
*git checkout -b new-index&lt;br /&gt;
*git branch (show new branch)&lt;br /&gt;
*create index.html&lt;br /&gt;
*git status (show index.html needs to be added)&lt;br /&gt;
*git add .&lt;br /&gt;
*git status (show index.html staged for commit)&lt;br /&gt;
*git commit -am &amp;quot;adding index.html, fixes #1&lt;br /&gt;
&lt;br /&gt;
added basic head&lt;br /&gt;
added empty body&amp;quot;&lt;br /&gt;
*git status (nothing to commit)&lt;br /&gt;
*git push origin new-index&lt;br /&gt;
*go to fork&#039;s github page&lt;br /&gt;
*create pull request from fork new-index to mothership master&lt;br /&gt;
*look at pull request in mothership&lt;br /&gt;
*comment on incorrect tag in github diff (pretending you&#039;re a team mate)&lt;br /&gt;
*fix broken tag&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;fixing broken tag&amp;quot;&lt;br /&gt;
*git push&lt;br /&gt;
*Note the pull request has updated itself with the new commit&lt;br /&gt;
*accept pull request (pretending you&#039;re a team mate)&lt;br /&gt;
*Note #1 has closed itself because it was mentioned in commit with &amp;quot;fixes&amp;quot;&lt;br /&gt;
*git remote -v&lt;br /&gt;
*note &amp;quot;origin&amp;quot; is assigned to fork&lt;br /&gt;
*copy mothership&#039;s SSH URL&lt;br /&gt;
*git help remote (show how to get help on a command)&lt;br /&gt;
*git remote add mothership [mothership&#039;s URL]&lt;br /&gt;
*git checkout master&lt;br /&gt;
*git pull mothership (gets your changes from new-index)&lt;br /&gt;
*git branch -d new-index (deletes new-index)&lt;br /&gt;
*add &amp;quot;foo&amp;quot; to body of index.html in github (pretending you&#039;re a team mate)&lt;br /&gt;
*add &amp;quot;bar&amp;quot; to body of index.html locally&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;adding bar&amp;quot;&lt;br /&gt;
*git pull mothership (merge conflict!)&lt;br /&gt;
*resolve merge conflict&lt;br /&gt;
*git commit -am &amp;quot;resolving merge conflict&amp;quot;&lt;br /&gt;
*add &amp;quot;baz&amp;quot; to body of index.html&lt;br /&gt;
*git status (index.html is modified but not yet committed)&lt;br /&gt;
*git stash (working tree is back to no local changes)&lt;br /&gt;
*git stash pop (baz is back)&lt;br /&gt;
*git commit -am &amp;quot;adding baz&amp;quot; (oh no, I don&#039;t actually like this change)&lt;br /&gt;
*git reset HEAD^ &lt;br /&gt;
*git status (baz is still there, but not committed)&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17772</id>
		<title>Github</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17772"/>
		<updated>2013-01-25T18:20:14Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using github for a school project, it is recommended that you do not fully trust each other to do the right thing. For this reason we recommend the Review To Commit (RTC) pattern. Note RTC is a bit of a misnomer for a git-oriented workflow-- it could more accurately be called Review to Merge.&lt;br /&gt;
&lt;br /&gt;
Here is a diagram of the idealized untrusted development workflow:&lt;br /&gt;
[[File:Diagram-1.gif|400px]]&lt;br /&gt;
&lt;br /&gt;
Here is the script we went through (or at least, intended to) in class:&lt;br /&gt;
&lt;br /&gt;
*Change to CarletonU-COMP2406-W2013 account perspective (top-left)&lt;br /&gt;
*Show all repos (right), news feed (center)&lt;br /&gt;
*Choose Demo-Repo (repo hereafter &amp;quot;mothership&amp;quot;)&lt;br /&gt;
*fork to private github account (top right) (repo hereafter &amp;quot;fork&amp;quot;)&lt;br /&gt;
*grab SSH URL&lt;br /&gt;
*git clone [fork&#039;s SSH URL] &lt;br /&gt;
*show contents of directory (note .git, .gitignore, README.md)&lt;br /&gt;
*create issue in mothership &amp;quot;need index.html&amp;quot;&lt;br /&gt;
*assign issue to self&lt;br /&gt;
*git checkout -b new-index&lt;br /&gt;
*git branch (show new branch)&lt;br /&gt;
*create index.html&lt;br /&gt;
*git status (show index.html needs to be added)&lt;br /&gt;
*git add .&lt;br /&gt;
*git status (show index.html staged for commit)&lt;br /&gt;
*git commit -am &amp;quot;adding index.html, fixes #1&lt;br /&gt;
&lt;br /&gt;
added basic head&lt;br /&gt;
added empty body&amp;quot;&lt;br /&gt;
*git status (nothing to commit)&lt;br /&gt;
*git push origin new-index&lt;br /&gt;
*go to fork&#039;s github page&lt;br /&gt;
*create pull request from fork new-index to mothership master&lt;br /&gt;
*look at pull request in mothership&lt;br /&gt;
*comment on incorrect tag in github diff (pretending you&#039;re a team mate)&lt;br /&gt;
*fix broken tag&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;fixing broken tag&amp;quot;&lt;br /&gt;
*git push&lt;br /&gt;
*Note the pull request has updated itself with the new commit&lt;br /&gt;
*accept pull request (pretending you&#039;re a team mate)&lt;br /&gt;
*Note #1 has closed itself because it was mentioned in commit with &amp;quot;fixes&amp;quot;&lt;br /&gt;
*git remote -v&lt;br /&gt;
*note &amp;quot;origin&amp;quot; is assigned to fork&lt;br /&gt;
*copy mothership&#039;s SSH URL&lt;br /&gt;
*git help remote (show how to get help on a command)&lt;br /&gt;
*git remote add mothership [mothership&#039;s URL]&lt;br /&gt;
*git checkout master&lt;br /&gt;
*git pull mothership (gets your changes from new-index)&lt;br /&gt;
*git branch -d new-index (deletes new-index)&lt;br /&gt;
*add &amp;quot;foo&amp;quot; to body of index.html in github (pretending you&#039;re a team mate)&lt;br /&gt;
*add &amp;quot;bar&amp;quot; to body of index.html locally&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;adding bar&amp;quot;&lt;br /&gt;
*git pull mothership (merge conflict!)&lt;br /&gt;
*resolve merge conflict&lt;br /&gt;
*git commit -am &amp;quot;resolving merge conflict&amp;quot;&lt;br /&gt;
*add &amp;quot;baz&amp;quot; to body of index.html&lt;br /&gt;
*git status (index.html is modified but not yet committed)&lt;br /&gt;
*git stash (working tree is back to no local changes)&lt;br /&gt;
*git stash pop (baz is back)&lt;br /&gt;
*git commit -am &amp;quot;adding baz&amp;quot; (oh no, I don&#039;t actually like this change)&lt;br /&gt;
*git reset HEAD^ &lt;br /&gt;
*git status (baz is still there, but not committed)&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17771</id>
		<title>Github</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Github&amp;diff=17771"/>
		<updated>2013-01-25T18:19:10Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: Created page with &amp;quot;When using github for a school project, it is recommended that you do not fully trust each other to do the right thing. For this reason we recommend the Review To Commit (RTC)...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When using github for a school project, it is recommended that you do not fully trust each other to do the right thing. For this reason we recommend the Review To Commit (RTC) pattern. Note RTC is a bit of a misnomer for a git-oriented workflow-- it could more accurately be called Review to Merge.&lt;br /&gt;
&lt;br /&gt;
Here is a diagram of the idealized untrusted development workflow:&lt;br /&gt;
[[File:Diagram-1.gif]]&lt;br /&gt;
&lt;br /&gt;
Here is the script we went through (or at least, intended to) in class:&lt;br /&gt;
&lt;br /&gt;
*Change to CarletonU-COMP2406-W2013 account perspective (top-left)&lt;br /&gt;
*Show all repos (right), news feed (center)&lt;br /&gt;
*Choose Demo-Repo (repo hereafter &amp;quot;mothership&amp;quot;)&lt;br /&gt;
*fork to private github account (top right) (repo hereafter &amp;quot;fork&amp;quot;)&lt;br /&gt;
*grab SSH URL&lt;br /&gt;
*git clone [fork&#039;s SSH URL] &lt;br /&gt;
*show contents of directory (note .git, .gitignore, README.md)&lt;br /&gt;
*create issue in mothership &amp;quot;need index.html&amp;quot;&lt;br /&gt;
*assign issue to self&lt;br /&gt;
*git checkout -b new-index&lt;br /&gt;
*git branch (show new branch)&lt;br /&gt;
*create index.html&lt;br /&gt;
*git status (show index.html needs to be added)&lt;br /&gt;
*git add .&lt;br /&gt;
*git status (show index.html staged for commit)&lt;br /&gt;
*git commit -am &amp;quot;adding index.html, fixes #1&lt;br /&gt;
&lt;br /&gt;
added basic head&lt;br /&gt;
added empty body&amp;quot;&lt;br /&gt;
*git status (nothing to commit)&lt;br /&gt;
*git push origin new-index&lt;br /&gt;
*go to fork&#039;s github page&lt;br /&gt;
*create pull request from fork new-index to mothership master&lt;br /&gt;
*look at pull request in mothership&lt;br /&gt;
*comment on incorrect tag in github diff (pretending you&#039;re a team mate)&lt;br /&gt;
*fix broken tag&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;fixing broken tag&amp;quot;&lt;br /&gt;
*git push&lt;br /&gt;
*Note the pull request has updated itself with the new commit&lt;br /&gt;
*accept pull request (pretending you&#039;re a team mate)&lt;br /&gt;
*Note #1 has closed itself because it was mentioned in commit with &amp;quot;fixes&amp;quot;&lt;br /&gt;
*git remote -v&lt;br /&gt;
*note &amp;quot;origin&amp;quot; is assigned to fork&lt;br /&gt;
*copy mothership&#039;s SSH URL&lt;br /&gt;
*git help remote (show how to get help on a command)&lt;br /&gt;
*git remote add mothership [mothership&#039;s URL]&lt;br /&gt;
*git checkout master&lt;br /&gt;
*git pull mothership (gets your changes from new-index)&lt;br /&gt;
*git branch -d new-index (deletes new-index)&lt;br /&gt;
*add &amp;quot;foo&amp;quot; to body of index.html in github (pretending you&#039;re a team mate)&lt;br /&gt;
*add &amp;quot;bar&amp;quot; to body of index.html locally&lt;br /&gt;
*git status&lt;br /&gt;
*git commit -am &amp;quot;adding bar&amp;quot;&lt;br /&gt;
*git pull mothership (merge conflict!)&lt;br /&gt;
*resolve merge conflict&lt;br /&gt;
*git commit -am &amp;quot;resolving merge conflict&amp;quot;&lt;br /&gt;
*add &amp;quot;baz&amp;quot; to body of index.html&lt;br /&gt;
*git status (index.html is modified but not yet committed)&lt;br /&gt;
*git stash (working tree is back to no local changes)&lt;br /&gt;
*git stash pop (baz is back)&lt;br /&gt;
*git commit -am &amp;quot;adding baz&amp;quot; (oh no, I don&#039;t actually like this change)&lt;br /&gt;
*git reset HEAD^ &lt;br /&gt;
*git status (baz is still there, but not committed)&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Fundamentals_of_Web_Applications_(Winter_2013)&amp;diff=17770</id>
		<title>Fundamentals of Web Applications (Winter 2013)</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Fundamentals_of_Web_Applications_(Winter_2013)&amp;diff=17770"/>
		<updated>2013-01-25T18:08:14Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: /* Git */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Administration==&lt;br /&gt;
&lt;br /&gt;
The course outline for this course is [http://www.scs.carleton.ca/courses/course_outline.php?Term=Winter&amp;amp;Year=2013&amp;amp;Number=COMP%202406 here].&lt;br /&gt;
&lt;br /&gt;
Course discussions will be on Piazza.  You can sign up [https://piazza.com/carleton.ca/winter2013/comp24062006 here].  Note that Piazza has a &amp;quot;groups&amp;quot; function that can help you find partners for your project.  &#039;&#039;&#039;Also note that Piazza asks for your carleton.ca email address, so you can&#039;t directly sign up with an anonymous email address.&#039;&#039;&#039;  If you wish to sign up using an anonymous/throw-away email address, please email Prof. Somayaji or a TA and they can enroll that email address manually.&lt;br /&gt;
&lt;br /&gt;
You should get an account on this wiki so you can add to it.  To get one, email Prof. Somayaji with your preferred username and the email address to which a password should be sent.  (Note this is not a requirement.)&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
===JavaScript===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;You should go out and learn the basics of JavaScript on your own.&#039;&#039;&#039;  While we will discuss the language in class, much of that discussion will make more sense if you&#039;ve already spent some time with the language.  You should also get basic exposure to standard web technologies.&lt;br /&gt;
&lt;br /&gt;
The easiest way to get started with JavaScript and gain a basic understanding of web technologies is to go through the interactive lessons on [http://codeacademy.com Codecademy].  I suggest you go through their JavaScript, Web Fundamentals, and jQuery tracks.  They shouldn&#039;t take you very long given that you already know how to program.&lt;br /&gt;
&lt;br /&gt;
You should get access to &#039;&#039;JavaScript: The Good Parts&#039;&#039; by Douglas Crockford ([http://it-ebooks.info/book/274/.do free PDF download]).  Read it.  It is available through [http://shop.oreilly.com/product/9780596517748.do O&#039;Reilly], [http://my.safaribooksonline.com/9780596517748 Safari Books Online], and the regular online bookstores.  There is even [https://www.inkling.com/store/book/javascript-good-parts-douglas-crockford-1st/ an interactive version] which includes an embedded JavaScript interpreter.  You can get access to Safari Books Online through the Carleton Library (four concurrent users only) or partial access by becoming a member of [https://www.computer.org IEEE Computer Society].&lt;br /&gt;
&lt;br /&gt;
Crockford also has a lot of online resources on JavaScript, including videos of talks he&#039;s given that cover much of the content in his book.  Look at his [http://javascript.crockford.com/ JavaScript page] and this [http://yuiblog.com/crockford/ page of his videos].&lt;br /&gt;
&lt;br /&gt;
Another good book is [http://eloquentjavascript.net/ Eloquent JavaScript: A Modern Introduction to Programming] by Marijn Haverbeke.  A version of this book is available online for free.  The for-sale version is apparently updated and edited.&lt;br /&gt;
&lt;br /&gt;
And of course the standard reference for JavaScript is [http://shop.oreilly.com/product/9780596805531.do JavaScript: The Definitive Guide] by David Flanagan.  It is a big book, but it is comprehensive.&lt;br /&gt;
&lt;br /&gt;
===Node===&lt;br /&gt;
&lt;br /&gt;
You will be building your application in node.js this term.  A good, relatively comprehensive book is [http://shop.oreilly.com/product/0636920024606.do Learning Node] by Shelley Powers.  This book is recommended but not required.  A quick way to get started with node.js is [http://www.nodebeginner.org/ The Node Beginner Book] by Manuel Kiessling.&lt;br /&gt;
&lt;br /&gt;
===Git===&lt;br /&gt;
There is now a [[github]] organization for the course [https://github.com/CarletonU-COMP2406-W2013 here].  The TAs will add your team to this organization.  Please check your code and documentation into github regularly so the TAs may monitor your progress.&lt;br /&gt;
&lt;br /&gt;
When your team is added, your TA will do the following:&lt;br /&gt;
* Setup a repository named after your team.  This initial repository will have a README and will ignore temporary files generated by node.js.  You will have full administrative access to this repository.&lt;br /&gt;
* Your team will include your TA as a member (along with you and your teammate).&lt;br /&gt;
&lt;br /&gt;
==Lectures==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Date&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Topic&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 8&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Introduction&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 10&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W: Symbols|Symbols]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 15&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W Lecture 3|Lecture 3]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 17&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W Lecture 4|Lecture 4]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 22&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W Lecture 5|Lecture 5]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 24&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W Lecture 6|Lecture 6]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 29&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W Lecture 7|Lecture 7]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 31&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W Lecture 8|Lecture 8]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tutorials/Weekly Tasks==&lt;br /&gt;
&lt;br /&gt;
Each week you will get a progress grade from 0-4, given to you by a TA.  If you are being diligent, you should be able to get 4&#039;s every week.  The easiest way to get your grade is to come to tutorial and meet with your TA; alternatively, you can meet a TA in their office hours or, at their discretion, discuss things with them online.&lt;br /&gt;
&lt;br /&gt;
Initially you can talk to any TA to get your progress grade.  Once groups have been formed, however, you will have an assigned TA who will be tracking your progress throughout the semester.&lt;br /&gt;
&lt;br /&gt;
Below is a schedule of the tasks everyone needs to accomplish each week.  Note that you must complete each task before your next tutorial.  So, if you attend the Monday tutorials, you need to show progress before the following Monday.&lt;br /&gt;
&lt;br /&gt;
After February 1st, all milestones are between you and your TA and will follow those outlined in your proposal.  Milestone deliverables and precise due dates may be revised at the discretion of your TA.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Date&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Task/Milestone&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 11, 14&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W: Tasks 1|Setup Accounts, Run node.js]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 18, 21&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W: Task 2|Project Partners, Blog Example]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 25, 28&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W: Web Develper Tools|Web Developer Tools (optional)]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 1, 1 PM&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[WebFund 2013W: Proposal|Project Proposal Due]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Humor/Quotes==&lt;br /&gt;
&lt;br /&gt;
[http://xkcd.com/327/ Bobby Tables], a funny example of SQL injection.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Scalability is always the answer in this class.&amp;quot; -Alexis&lt;br /&gt;
&lt;br /&gt;
&amp;quot;First axiom of this class is web apps suck.&amp;quot; -[[User:soma |Professor Somayaji]]&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Diagram-1.gif&amp;diff=17753</id>
		<title>File:Diagram-1.gif</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Diagram-1.gif&amp;diff=17753"/>
		<updated>2013-01-25T15:56:57Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: Diagram of idealized Github workflow for one user, when working with an untrusted team.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Diagram of idealized Github workflow for one user, when working with an untrusted team.&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_Tasks_1&amp;diff=17688</id>
		<title>WebFund 2013W: Tasks 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2013W:_Tasks_1&amp;diff=17688"/>
		<updated>2013-01-11T19:50:02Z</updated>

		<summary type="html">&lt;p&gt;Abeinges: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Your first set of tasks is the following:&lt;br /&gt;
* Setup an account on [https://piazza.com/carleton.ca/winter2013/comp24062006 Piazza].&lt;br /&gt;
* Setup an account on [https://github.com/ github].&lt;br /&gt;
* Validate your student email with github to get a free student account: https://github.com/edu&lt;br /&gt;
* Setup an account on the class wiki (optional).&lt;br /&gt;
* Tell the TA your current ideas on your term project and who you would like to work with.&lt;br /&gt;
* If you do not have a partner, post a blurb on Piazza to help find one, or reply to an existing solicitation.&lt;br /&gt;
* Download and install (or run) a copy of node.js.  A local copy of the standalone 32-bit Windows executable is available [http://homeostasis.scs.carleton.ca/~soma/webfund-2013w/node.exe here].  Other versions are available [http://nodejs.org/download/ here].&lt;br /&gt;
* Run the node.js hello world program as shown in the &amp;quot;Basic HTTP server&amp;quot; section of the [http://www.nodebeginner.org/#a-basic-http-server Node Beginner Book].&lt;/div&gt;</summary>
		<author><name>Abeinges</name></author>
	</entry>
</feed>