<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Shivjot</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Shivjot"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Shivjot"/>
	<updated>2026-05-12T18:07:38Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20072</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20072"/>
		<updated>2015-03-31T02:53:20Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Haystack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Haystack=&lt;br /&gt;
* Facebook&#039;s Photo Application Storage System. &lt;br /&gt;
* Facebook&#039;s previous photo storage was based on an NFS design. NFS did not work well because it required 3 disk reads to serve a photo, while the goal was 1 read per photo.&lt;br /&gt;
*Main goals of Haystack:&lt;br /&gt;
** High throughput with low latency&lt;br /&gt;
**Fault tolerance&lt;br /&gt;
**Cost effective&lt;br /&gt;
**Simple&lt;br /&gt;
*Facebook uses a CDN to serve popular images, and uses Haystack to respond efficiently to photo requests in the long tail.&lt;br /&gt;
*Haystack reduces the memory used for &#039;&#039;filesystem metadata&#039;&#039; &lt;br /&gt;
*It has 2 types of metadata:&lt;br /&gt;
**&#039;&#039;Application metadata&#039;&#039;&lt;br /&gt;
**&#039;&#039;File System metadata&#039;&#039;&lt;br /&gt;
* The architecture consists of 3 components:&lt;br /&gt;
**Haystack Store&lt;br /&gt;
**Haystack Directory&lt;br /&gt;
**Haystack Cache&lt;br /&gt;
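As a rough sketch of the one-read idea above, assuming hypothetical names (this is an illustrative toy, not Facebook code): photos are appended to one large volume file, and a small in-memory index maps each photo id to an offset and size, so serving a photo costs a single disk read.

```python
# Toy haystack-style store: photos are appended to one large volume
# file, and a small in-memory index maps photo_id to (offset, size),
# so serving a photo costs exactly one disk read instead of several
# filesystem-metadata reads.
class MiniHaystack:
    def __init__(self, path):
        self.path = path
        self.index = {}           # photo_id -> (offset, size), in RAM
        open(path, "wb").close()  # start with an empty volume file

    def put(self, photo_id, data):
        with open(self.path, "ab") as f:
            f.seek(0, 2)          # position at end of the volume
            offset = f.tell()
            f.write(data)
        self.index[photo_id] = (offset, len(data))

    def get(self, photo_id):
        offset, size = self.index[photo_id]  # memory lookup, no disk I/O
        with open(self.path, "rb") as f:     # the single disk read
            f.seek(offset)
            return f.read(size)
```

Keeping the per-photo metadata in RAM is exactly what lets the lookup skip the extra reads that the NFS design paid for.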
&lt;br /&gt;
=Comet=&lt;br /&gt;
*Built around the concept of distributed shared memory (DSM). In a DSM, the RAM of multiple servers appears to belong to a single server, allowing caching to scale better.&lt;br /&gt;
*The Comet model works by offloading a computation-intensive process from the mobile device to a single server.&lt;br /&gt;
*Offloading works by passing the computation-intensive process to the server while holding it paused on the mobile device. Once the process completes on the server, the results and the handle are returned to the mobile device. In other words, the process is not permanently migrated: it runs on the server while it remains stopped on the mobile device.&lt;br /&gt;
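A toy model of this offload-and-hold pattern, using a thread pool to stand in for the remote server (all names here are illustrative, not the Comet API):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of Comet-style offloading: the mobile side pauses the
# computation locally, hands it to the server, blocks until the server
# finishes, then resumes with the returned result. The computation runs
# on exactly one side at a time, mirroring the run-there, hold-here idea.
def offload(server, fn, *args):
    future = server.submit(fn, *args)  # ship the work to the server
    return future.result()             # mobile blocks while work is held

def heavy_computation(n):
    return sum(i * i for i in range(n))  # stand-in for expensive work

server = ThreadPoolExecutor(max_workers=1)  # the single offload target
result = offload(server, heavy_computation, 1000)
```

The blocking call plays the role of the held process on the device; control only resumes once the server hands the result back.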
=F4= &lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block towards a global distributed system. The main critique is that the paper does not present a specific use case around which the design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. No distributed system model can be “one size fits all”; it will most probably break in some large-scale distributed application.&lt;br /&gt;
*Reaching a global distributed system that addresses all the distributed OS use cases will be the cumulative work of many groups, built block by block; the system will then evolve by putting these different building blocks together. In other words, a global distributed system will come from a “bottom up, not top down” approach [Somayaji, 2015].&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20071</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20071"/>
		<updated>2015-03-31T02:52:40Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Haystack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Haystack=&lt;br /&gt;
* Facebook&#039;s Photo Application Storage System. &lt;br /&gt;
* Facebook&#039;s previous photo storage was based on an NFS design. NFS did not work well because it required 3 disk reads to serve a photo, while the goal was 1 read per photo.&lt;br /&gt;
*Main goals of Haystack:&lt;br /&gt;
** High throughput with low latency&lt;br /&gt;
**Fault tolerance&lt;br /&gt;
**Cost effective&lt;br /&gt;
**Simple&lt;br /&gt;
*Facebook uses a CDN to serve popular images, and uses Haystack to respond efficiently to photo requests in the long tail.&lt;br /&gt;
*Haystack reduces the memory used for &#039;&#039;filesystem metadata&#039;&#039; &lt;br /&gt;
*It has 2 types of metadata:&lt;br /&gt;
**&#039;&#039;Application metadata&#039;&#039;&lt;br /&gt;
**&#039;&#039;File System metadata&#039;&#039;&lt;br /&gt;
* The architecture consists of 3 components:&lt;br /&gt;
**Haystack Store&lt;br /&gt;
**Haystack Directory&lt;br /&gt;
**Haystack Cache&lt;br /&gt;
&lt;br /&gt;
=Comet=&lt;br /&gt;
*Built around the concept of distributed shared memory (DSM). In a DSM, the RAM of multiple servers appears to belong to a single server, allowing caching to scale better.&lt;br /&gt;
*The Comet model works by offloading a computation-intensive process from the mobile device to a single server.&lt;br /&gt;
*Offloading works by passing the computation-intensive process to the server while holding it paused on the mobile device. Once the process completes on the server, the results and the handle are returned to the mobile device. In other words, the process is not permanently migrated: it runs on the server while it remains stopped on the mobile device.&lt;br /&gt;
=F4= &lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block towards a global distributed system. The main critique is that the paper does not present a specific use case around which the design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. No distributed system model can be “one size fits all”; it will most probably break in some large-scale distributed application.&lt;br /&gt;
*Reaching a global distributed system that addresses all the distributed OS use cases will be the cumulative work of many groups, built block by block; the system will then evolve by putting these different building blocks together. In other words, a global distributed system will come from a “bottom up, not top down” approach [Somayaji, 2015].&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20070</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20070"/>
		<updated>2015-03-31T02:45:48Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Haystack */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Haystack=&lt;br /&gt;
* Facebook&#039;s Photo Application Storage System. &lt;br /&gt;
* Facebook&#039;s previous photo storage was based on an NFS design. NFS did not work well because it required 3 disk reads to serve a photo, while the goal was 1 read per photo.&lt;br /&gt;
*Main goals of Haystack:&lt;br /&gt;
** High throughput&lt;br /&gt;
&lt;br /&gt;
=Comet=&lt;br /&gt;
*Built around the concept of distributed shared memory (DSM). In a DSM, the RAM of multiple servers appears to belong to a single server, allowing caching to scale better.&lt;br /&gt;
*The Comet model works by offloading a computation-intensive process from the mobile device to a single server.&lt;br /&gt;
*Offloading works by passing the computation-intensive process to the server while holding it paused on the mobile device. Once the process completes on the server, the results and the handle are returned to the mobile device. In other words, the process is not permanently migrated: it runs on the server while it remains stopped on the mobile device.&lt;br /&gt;
=F4= &lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block towards a global distributed system. The main critique is that the paper does not present a specific use case around which the design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. No distributed system model can be “one size fits all”; it will most probably break in some large-scale distributed application.&lt;br /&gt;
*Reaching a global distributed system that addresses all the distributed OS use cases will be the cumulative work of many groups, built block by block; the system will then evolve by putting these different building blocks together. In other words, a global distributed system will come from a “bottom up, not top down” approach [Somayaji, 2015].&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20069</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20069"/>
		<updated>2015-03-31T02:42:27Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Spanner */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth, and many more.&lt;br /&gt;
* BigTable is:&lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** A multi-dimensional sorted map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a tablet. Row keys should be chosen to give good locality for data access.&lt;br /&gt;
** Column Key: Grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Timestamp: Each cell holds multiple versions of the same data, indexed by timestamp. To avoid collisions, timestamps need to be generated by applications.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
** Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as:&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
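The map model above can be sketched as a nested dictionary keyed by row, column, and timestamp; this is an illustrative toy, not the BigTable API:

```python
# BigTable-style lookup semantics as a toy nested map:
# (row_key, column_key) selects a cell, and each cell keeps several
# timestamped versions of its value, read newest-first by default.
class MiniBigtable:
    def __init__(self):
        self.cells = {}  # (row, "family:qualifier") -> {timestamp: value}

    def write(self, row, column, timestamp, value):
        self.cells.setdefault((row, column), {})[timestamp] = value

    def read(self, row, column, timestamp=None):
        versions = self.cells[(row, column)]
        if timestamp is None:
            timestamp = max(versions)  # newest version by default
        return versions[timestamp]
```

The versions-per-cell dictionary is what makes the map sparse and multi-dimensional: only written cells exist, and each carries its own timestamp history.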
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability.&lt;br /&gt;
*Shifted the paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as normal case without impact on availability and performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing and consistency is facilitated by use of object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data item that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
* Service Level Agreements (SLAs): negotiated contracts between a client and a service regarding system characteristics, used to guarantee that an application can deliver its functionality within a bounded time period.&lt;br /&gt;
* System Architecture: It consists of the &#039;&#039;System Interface&#039;&#039;, &#039;&#039;Partitioning Algorithm&#039;&#039;, &#039;&#039;Replication&#039;&#039;, and &#039;&#039;Data Versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability, and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
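The partitioning scheme above can be sketched with a minimal consistent-hashing ring (a hypothetical class, not the Dynamo code): nodes and keys hash onto a ring, a key lands on the first node clockwise from its position, and the next distinct nodes hold its replicas.

```python
import bisect
import hashlib

# Sketch of Dynamo-style consistent hashing: nodes and keys hash onto
# a ring; a key is stored on the first node clockwise from its hash,
# and the following distinct nodes form the rest of its preference
# list (the replicas).
class Ring:
    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def preference_list(self, key, n):
        h = self._hash(key)
        i = bisect.bisect(self.ring, (h, ""))  # first node clockwise
        picks = []
        while len(picks) != n:                 # n distinct replica hosts
            node = self.ring[i % len(self.ring)][1]
            if node not in picks:
                picks.append(node)
            i += 1
        return picks
```

Because only neighbouring positions on the ring are affected when a node joins or leaves, repartitioning stays incremental, which is what makes the scheme attractive for an always-available store.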
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
* Facebook&#039;s storage system, built to fulfil the needs of the Inbox Search problem.&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
*Distributed multi dimensional map indexed by a key&lt;br /&gt;
* In its data model:&lt;br /&gt;
** Columns are grouped into sets called column families, which are of 2 types:&lt;br /&gt;
***Simple column families&lt;br /&gt;
***Super column families&lt;br /&gt;
* API consists of :&lt;br /&gt;
** Insert&lt;br /&gt;
**Get&lt;br /&gt;
** Delete&lt;br /&gt;
* System Architecture consists of :&lt;br /&gt;
** Partitioning: Takes place using consistent hashing&lt;br /&gt;
**Replication: Each item replicated at n hosts where &amp;quot;n&amp;quot; is the replication factor configured per system. &lt;br /&gt;
** Membership: Cluster membership is based on Scuttlebutt, a highly efficient anti-entropy gossip-based mechanism. Membership further has sub-parts such as:&lt;br /&gt;
***Failure Detection&lt;br /&gt;
**Bootstrapping&lt;br /&gt;
** Scaling the cluster&lt;br /&gt;
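The keyed multi-dimensional map and the Insert/Get/Delete API can be sketched as follows (an illustrative toy, not the Cassandra interface):

```python
# Sketch of Cassandra's keyed multi-dimensional map: a row key selects
# a record, a column family groups related columns, and each column
# holds a value. The three-method API mirrors insert/get/delete.
class MiniCassandra:
    def __init__(self):
        self.rows = {}  # row_key -> {column_family: {column: value}}

    def insert(self, key, column_family, column, value):
        row = self.rows.setdefault(key, {})
        row.setdefault(column_family, {})[column] = value

    def get(self, key, column_family, column):
        return self.rows[key][column_family][column]

    def delete(self, key, column_family, column):
        del self.rows[key][column_family][column]
```

A super column family would add one more level of nesting inside the column-family dictionary; the lookup path otherwise stays the same.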
&lt;br /&gt;
=Spanner=&lt;br /&gt;
* Google&#039;s scalable, multi-version, globally distributed database.&lt;br /&gt;
* Built on top of Google&#039;s BigTable.&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface.&lt;br /&gt;
*Main focus is managing cross-datacentre replicated data.&lt;br /&gt;
* Uses TrueTime to guarantee the correctness properties around concurrency control.&lt;br /&gt;
** TrueTime timestamps are used to order transactions globally.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20068</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20068"/>
		<updated>2015-03-31T02:34:47Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Spanner */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth, and many more.&lt;br /&gt;
* BigTable is:&lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** A multi-dimensional sorted map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a tablet. Row keys should be chosen to give good locality for data access.&lt;br /&gt;
** Column Key: Grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Timestamp: Each cell holds multiple versions of the same data, indexed by timestamp. To avoid collisions, timestamps need to be generated by applications.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
** Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as:&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability.&lt;br /&gt;
*Shifted the paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as normal case without impact on availability and performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing and consistency is facilitated by use of object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data item that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
* Service Level Agreements (SLAs): negotiated contracts between a client and a service regarding system characteristics, used to guarantee that an application can deliver its functionality within a bounded time period.&lt;br /&gt;
* System Architecture: It consists of the &#039;&#039;System Interface&#039;&#039;, &#039;&#039;Partitioning Algorithm&#039;&#039;, &#039;&#039;Replication&#039;&#039;, and &#039;&#039;Data Versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability, and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
* Facebook&#039;s storage system, built to fulfil the needs of the Inbox Search problem.&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
*Distributed multi dimensional map indexed by a key&lt;br /&gt;
* In its data model:&lt;br /&gt;
** Columns are grouped into sets called column families, which are of 2 types:&lt;br /&gt;
***Simple column families&lt;br /&gt;
***Super column families&lt;br /&gt;
* API consists of :&lt;br /&gt;
** Insert&lt;br /&gt;
**Get&lt;br /&gt;
** Delete&lt;br /&gt;
* System Architecture consists of :&lt;br /&gt;
** Partitioning: Takes place using consistent hashing&lt;br /&gt;
**Replication: Each item replicated at n hosts where &amp;quot;n&amp;quot; is the replication factor configured per system. &lt;br /&gt;
** Membership: Cluster membership is based on Scuttlebutt, a highly efficient anti-entropy gossip-based mechanism. Membership further has sub-parts such as:&lt;br /&gt;
***Failure Detection&lt;br /&gt;
**Bootstrapping&lt;br /&gt;
** Scaling the cluster&lt;br /&gt;
&lt;br /&gt;
=Spanner=&lt;br /&gt;
* Google&#039;s scalable, multi-version, globally distributed database.&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface.&lt;br /&gt;
*Main focus is managing cross-datacentre replicated data.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20067</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20067"/>
		<updated>2015-03-31T02:31:24Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Cassandra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth, and many more.&lt;br /&gt;
* BigTable is:&lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** A multi-dimensional sorted map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a tablet. Row keys should be chosen to give good locality for data access.&lt;br /&gt;
** Column Key: Grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Timestamp: Each cell holds multiple versions of the same data, indexed by timestamp. To avoid collisions, timestamps need to be generated by applications.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
** Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as:&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability.&lt;br /&gt;
*Shifted the paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as normal case without impact on availability and performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing and consistency is facilitated by use of object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data item that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
* Service Level Agreements (SLAs): negotiated contracts between a client and a service regarding system characteristics, used to guarantee that an application can deliver its functionality within a bounded time period.&lt;br /&gt;
* System Architecture: It consists of the &#039;&#039;System Interface&#039;&#039;, &#039;&#039;Partitioning Algorithm&#039;&#039;, &#039;&#039;Replication&#039;&#039;, and &#039;&#039;Data Versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability, and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
* Facebook&#039;s storage system, built to fulfil the needs of the Inbox Search problem.&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
*Distributed multi dimensional map indexed by a key&lt;br /&gt;
* In its data model:&lt;br /&gt;
** Columns are grouped into sets called column families, which are of 2 types:&lt;br /&gt;
***Simple column families&lt;br /&gt;
***Super column families&lt;br /&gt;
* API consists of :&lt;br /&gt;
** Insert&lt;br /&gt;
**Get&lt;br /&gt;
** Delete&lt;br /&gt;
* System Architecture consists of :&lt;br /&gt;
** Partitioning: Takes place using consistent hashing&lt;br /&gt;
**Replication: Each item replicated at n hosts where &amp;quot;n&amp;quot; is the replication factor configured per system. &lt;br /&gt;
** Membership: Cluster membership is based on Scuttlebutt, a highly efficient anti-entropy gossip-based mechanism. Membership further has sub-parts such as:&lt;br /&gt;
***Failure Detection&lt;br /&gt;
**Bootstrapping&lt;br /&gt;
** Scaling the cluster&lt;br /&gt;
&lt;br /&gt;
=Spanner=&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20066</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20066"/>
		<updated>2015-03-31T02:29:15Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Cassandra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth, and many more.&lt;br /&gt;
* BigTable is:&lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** A multi-dimensional sorted map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a tablet. Row keys should be chosen to give good locality for data access.&lt;br /&gt;
** Column Key: Grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Timestamp: Each cell holds multiple versions of the same data, indexed by timestamp. To avoid collisions, timestamps need to be generated by applications.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
** Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as:&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability.&lt;br /&gt;
*Shifted the paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as normal case without impact on availability and performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing and consistency is facilitated by use of object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data item that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
* Service Level Agreements (SLAs): negotiated contracts between a client and a service regarding system characteristics, used to guarantee that an application can deliver its functionality within a bounded time period.&lt;br /&gt;
* System Architecture: It consists of the &#039;&#039;System Interface&#039;&#039;, &#039;&#039;Partitioning Algorithm&#039;&#039;, &#039;&#039;Replication&#039;&#039;, and &#039;&#039;Data Versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability, and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
* Facebook&#039;s storage system, built to fulfil the needs of the Inbox Search problem.&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
*Distributed multi dimensional map indexed by a key&lt;br /&gt;
* In its data model:&lt;br /&gt;
** Columns are grouped into sets called column families, which are of 2 types:&lt;br /&gt;
***Simple column families&lt;br /&gt;
***Super column families&lt;br /&gt;
* API consists of :&lt;br /&gt;
** Insert&lt;br /&gt;
**Get&lt;br /&gt;
** Delete&lt;br /&gt;
* System Architecture consists of :&lt;br /&gt;
** Partitioning: Takes place using consistent hashing&lt;br /&gt;
**Replication: Each item replicated at n hosts where &amp;quot;n&amp;quot; is the replication factor configured per system. &lt;br /&gt;
** Membership&lt;br /&gt;
**Bootstrapping&lt;br /&gt;
** Scaling the cluster&lt;br /&gt;
&lt;br /&gt;
=Spanner=&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20065</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20065"/>
		<updated>2015-03-31T02:26:07Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Cassandra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth, and many more.&lt;br /&gt;
* BigTable is:&lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** A multi-dimensional sorted map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a tablet. Row keys should be chosen to give good locality for data access.&lt;br /&gt;
** Column Key: Grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Timestamp: Each cell holds multiple versions of the same data, indexed by timestamp. To avoid collisions, timestamps need to be generated by applications.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
** Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as:&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability.&lt;br /&gt;
*Shifted the paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as normal case without impact on availability and performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing and consistency is facilitated by use of object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data item that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
* Service Level Agreements (SLAs): negotiated contracts between a client and a service regarding system characteristics, used to guarantee that an application can deliver its functionality within a bounded time period.&lt;br /&gt;
* System Architecture: It consists of the &#039;&#039;System Interface&#039;&#039;, &#039;&#039;Partitioning Algorithm&#039;&#039;, &#039;&#039;Replication&#039;&#039;, and &#039;&#039;Data Versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
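The partitioning scheme above (consistent hashing with replication) can be sketched as follows; this is a generic illustration, not Amazon's implementation, and the names are invented:&lt;br /&gt;

```python
# Sketch of consistent hashing: nodes and keys hash onto a ring; a key is
# stored on the first node clockwise, plus the next distinct nodes as replicas.
import bisect
import hashlib

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=8):
        # each physical node gets several virtual points on the ring
        self._ring = sorted((_hash(f"{n}-{i}"), n) for n in nodes for i in range(vnodes))
        self._points = [p for p, _ in self._ring]

    def preference_list(self, key, n=3):
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        found = []
        while len(found) != n:  # walk clockwise collecting distinct nodes
            node = self._ring[idx % len(self._ring)][1]
            if node not in found:
                found.append(node)
            idx += 1
        return found
```

Dynamo additionally uses sloppy quorums and hinted handoff on top of this placement, which the sketch omits.&lt;br /&gt;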
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
* Facebook&#039;s storage system, built to meet the needs of the Inbox Search problem&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
*A distributed multi-dimensional map indexed by a key&lt;br /&gt;
* In its data model:&lt;br /&gt;
** Columns are grouped together into sets called column families. Column families are of 2 types:&lt;br /&gt;
***Simple column families&lt;br /&gt;
***Super column families&lt;br /&gt;
* API consists of :&lt;br /&gt;
** Insert&lt;br /&gt;
**Get&lt;br /&gt;
** Delete&lt;br /&gt;
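The thin API above (insert, get, delete over column families) can be mimicked with a nested dictionary; a hypothetical sketch, not Cassandra's real distributed, keyspace-aware interface:&lt;br /&gt;

```python
# Hypothetical in-memory sketch of a column-family store with Cassandra's
# thin API surface (insert / get / delete); all names are illustrative only.
class TinyColumnFamilyStore:
    def __init__(self):
        self._data = {}  # row_key -> {column_family: {column: value}}

    def insert(self, row_key, family, column, value):
        self._data.setdefault(row_key, {}).setdefault(family, {})[column] = value

    def get(self, row_key, family, column):
        return self._data.get(row_key, {}).get(family, {}).get(column)

    def delete(self, row_key, family, column):
        self._data.get(row_key, {}).get(family, {}).pop(column, None)
```

In the real system these operations are routed to replicas chosen by consistent hashing rather than served from one process.&lt;br /&gt;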
&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20064</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20064"/>
		<updated>2015-03-31T02:15:20Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Dynamo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
**Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as the normal case, without impacting availability or performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing; consistency is facilitated by object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data items that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability; Dynamo trades strict consistency for availability, targeting applications that can tolerate weaker consistency.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
*  Service Level Agreements (SLA): a negotiated contract between a client and a service regarding system characteristics such as expected request rate and latency. They are used to guarantee that, within a bounded time period, an application can deliver its functionality.&lt;br /&gt;
* System Architecture: It consists of the &#039;&#039;System Interface&#039;&#039;, &#039;&#039;Partitioning Algorithm&#039;&#039;, &#039;&#039;Replication&#039;&#039;, and &#039;&#039;Data Versioning&#039;&#039;.&lt;br /&gt;
* Successfully handles&lt;br /&gt;
** Server Failure&lt;br /&gt;
** Data Centre Failure&lt;br /&gt;
** Network Partitions&lt;br /&gt;
* Allows service owners to customize their storage systems to meet the desired performance, durability and consistency SLAs.&lt;br /&gt;
* Building block for highly available applications.&lt;br /&gt;
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20062</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20062"/>
		<updated>2015-03-30T23:18:32Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Dynamo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
**Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
* Amazon&#039;s Key Value Store&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as the normal case, without impacting availability or performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing; consistency is facilitated by object versioning.&lt;br /&gt;
* This system has certain requirements such as: &lt;br /&gt;
** Query Model: Simple read and write operations to data items that are uniquely identified by a key.&lt;br /&gt;
**ACID properties: Atomicity, Consistency, Isolation, Durability; Dynamo trades strict consistency for availability, targeting applications that can tolerate weaker consistency.&lt;br /&gt;
**Efficiency: System needs to function on a commodity hardware infrastructure.&lt;br /&gt;
* The system relies on Service Level Agreements (SLA): a negotiated contract between a client and a service regarding system characteristics such as expected request rate and latency. They are used to guarantee that, within a bounded time period, an application can deliver its functionality.&lt;br /&gt;
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20061</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20061"/>
		<updated>2015-03-30T23:07:54Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Dynamo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
**Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
*Sacrifices consistency under certain failure scenarios.&lt;br /&gt;
*Treats failure handling as the normal case, without impacting availability or performance.&lt;br /&gt;
*Data is partitioned and replicated using consistent hashing; consistency is facilitated by object versioning.&lt;br /&gt;
&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20060</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20060"/>
		<updated>2015-03-30T23:03:59Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
**Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
** A set of wrappers that allow BigTable to be used both as&lt;br /&gt;
*** Input source&lt;br /&gt;
***Output Target&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20049</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20049"/>
		<updated>2015-03-28T18:49:47Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
**Changing cluster, table, and column family metadata, such as access control rights&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20048</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20048"/>
		<updated>2015-03-28T18:48:41Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
* Big Table &#039;&#039;&#039;API&#039;&#039;&#039;: Provides functions for&lt;br /&gt;
** Creating and Deleting&lt;br /&gt;
*** Tables&lt;br /&gt;
*** Column Families&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20047</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20047"/>
		<updated>2015-03-28T18:46:18Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: Every read or write of data under a single row key is atomic. Each row range is called a Tablet. Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: Column keys are grouped into sets called Column Families, which form the basic unit of access control. Data stored in a column family is usually of the same type. Syntax used: &#039;&#039;family:qualifier&#039;&#039;&lt;br /&gt;
** Time Stamp: Each cell can hold multiple versions of the same data, indexed by timestamp. Applications that generate their own timestamps must make them unique to avoid collisions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20046</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20046"/>
		<updated>2015-03-28T18:41:51Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google&#039;s system for storing the data of various Google products, for instance Google Analytics, Google Finance, Orkut, Personalized Search, Writely, Google Earth and many more&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: &lt;br /&gt;
Every read or write of data under a single row key is atomic. &lt;br /&gt;
Each row range is called a Tablet. &lt;br /&gt;
Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: &lt;br /&gt;
** Time Stamp&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20045</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20045"/>
		<updated>2015-03-28T18:40:55Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google System used for storing data of various Google Products, for instance&lt;br /&gt;
** Google Analytics&lt;br /&gt;
** Google Finance&lt;br /&gt;
** Orkut&lt;br /&gt;
** Personalized Search&lt;br /&gt;
** Writely&lt;br /&gt;
**Google Earth&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key: &lt;br /&gt;
Every read or write of data under a single row key is atomic. &lt;br /&gt;
Each row range is called a Tablet. &lt;br /&gt;
Row keys should be chosen so that data accessed together has good locality.&lt;br /&gt;
** Column Key: &lt;br /&gt;
** Time Stamp&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20044</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20044"/>
		<updated>2015-03-28T18:36:32Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google System used for storing data of various Google Products, for instance&lt;br /&gt;
** Google Analytics&lt;br /&gt;
** Google Finance&lt;br /&gt;
** Orkut&lt;br /&gt;
** Personalized Search&lt;br /&gt;
** Writely&lt;br /&gt;
**Google Earth&lt;br /&gt;
&lt;br /&gt;
* BigTable is &lt;br /&gt;
** Sparse&lt;br /&gt;
** Persistent&lt;br /&gt;
** Multi-dimensional Sorted Map&lt;br /&gt;
*It is indexed by&lt;br /&gt;
** Row Key&lt;br /&gt;
** Column Key&lt;br /&gt;
** Time Stamp&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20043</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20043"/>
		<updated>2015-03-28T18:34:47Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google System used for storing data of various Google Products, for instance&lt;br /&gt;
** Google Analytics&lt;br /&gt;
** Google Finance&lt;br /&gt;
** Orkut&lt;br /&gt;
** Personalized Search&lt;br /&gt;
** Writely&lt;br /&gt;
**Google Earth&lt;br /&gt;
&lt;br /&gt;
* &lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20042</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20042"/>
		<updated>2015-03-28T18:34:08Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google System used for storing data of various Google Products, for instance&lt;br /&gt;
** Google Analytics&lt;br /&gt;
** Google Finance&lt;br /&gt;
** Orkut&lt;br /&gt;
** Personalized Search&lt;br /&gt;
** Writely&lt;br /&gt;
**Google Earth&lt;br /&gt;
 &lt;br /&gt;
* Distributed Storage System for managing structured data, designed to scale &lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20041</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20041"/>
		<updated>2015-03-28T18:32:37Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
* Google Product used for storing data of&lt;br /&gt;
** Google Analytics&lt;br /&gt;
** Google Finance&lt;br /&gt;
** Orkut&lt;br /&gt;
 &lt;br /&gt;
* Distributed Storage System for managing structured data, designed to scale &lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the watchword for Dynamo: Dynamo = availability&lt;br /&gt;
*Shifted the distributed systems paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
==Spanner==&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19969</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19969"/>
		<updated>2015-03-10T00:42:08Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers of entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application is identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public resource computing to analyze radio signals to find extraterrestrial intelligence&lt;br /&gt;
*Needs a good-quality radio telescope to search for signals, and lots of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public resource computing projects, whose resources are donated by the public&lt;br /&gt;
*Uses BOINC as a backbone for the project&lt;br /&gt;
*Uses a relational database to store information on a large scale, and a multi-threaded server to distribute work to clients&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google for doing large-scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
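The two phases can be demonstrated with a tiny word count in plain Python (no framework; the helper names are invented):&lt;br /&gt;

```python
# Map/reduce in miniature: mapper emits (key, value) pairs, pairs are
# grouped by key (a stand-in for the shuffle/sort step), reducer accumulates.
from itertools import groupby

def map_phase(records, mapper):
    # apply the mapper to every record, yielding (key, value) pairs
    for record in records:
        yield from mapper(record)

def reduce_phase(pairs, reducer):
    # group intermediate pairs by key, then accumulate each group
    by_key = sorted(pairs)  # shuffle/sort stand-in
    return {k: reducer(v for _, v in grp)
            for k, grp in groupby(by_key, key=lambda kv: kv[0])}

lines = ["to be or not to be"]
counts = reduce_phase(map_phase(lines, lambda line: ((w, 1) for w in line.split())), sum)
# counts == {"be": 2, "not": 1, "or": 1, "to": 2}
```

A real MapReduce run would shard the input across machines and run many mappers and reducers in parallel; the dataflow shape is the same.&lt;br /&gt;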
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities so that data results are almost instantaneous&lt;br /&gt;
*A distributed system for executing data parallel cyclic dataflow programs offering high throughput and low latency&lt;br /&gt;
*Aims to provide a general-purpose system that fulfills these requirements while also supporting a wide variety of high-level programming models.&lt;br /&gt;
*Real Time Applications:&lt;br /&gt;
:*Batch iterative machine learning: &lt;br /&gt;
VW (Vowpal Wabbit), an open-source distributed machine learning system, performs each iteration in 3 phases: each process updates its local state; processes independently train on local data; and the processes jointly compute a global average, which is AllReduce.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19962</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19962"/>
		<updated>2015-03-10T00:16:35Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;BONIC&amp;#039;&amp;#039;&amp;#039; *Public Resource Computing Platform *Gives scientists the ability to use large amounts of computation resources. *The goals of Bonic are:  :*1) reduce the barriers...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;BOINC&#039;&#039;&#039;&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers of entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application is identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19858</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19858"/>
		<updated>2015-02-10T03:22:15Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention which programming language was used, but it was Windows-based. They did not implement the file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a University or large corporations, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;Farsite&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;OceanStore&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Implemented Content Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Update Model handled data consistency, no Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Single tier, peer to peer model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Two tier, server client model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Cryptographic public, private key security&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Read and write privileges&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Randomized data replication&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Nomadic Data concept&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== FARSITE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	Cascading certificates system through directory hierarchy &lt;br /&gt;
•	 Keys &lt;br /&gt;
•	Three types of certificates &lt;br /&gt;
•	CFS requires an authorized certificate&lt;br /&gt;
•       Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, we trust the group not to make &lt;br /&gt;
        an incorrect update to directory metadata. This metadata includes an access control list (ACL) of public keys of all users&lt;br /&gt;
        who are authorized writers to that directory and to files in it&lt;br /&gt;
•       Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039; &lt;br /&gt;
•	Client Monitor, directory group, file host&lt;br /&gt;
•	When a directory group runs out of space, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== OCEANSTORE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	GUID and ACLs used for write, encryption used for reads.&lt;br /&gt;
•       To prevent unauthorized reads, it encrypts&lt;br /&gt;
        all data in the system that is not completely public and distributes the encryption key to those users with read permission&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19857</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19857"/>
		<updated>2015-02-10T03:21:25Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: /* Group 3 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention which programming language was used, but it was Windows-based. They did not implement the file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a University or large corporations, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;Farsite&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;OceanStore&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Implemented Content Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Update Model handled data consistency, no Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Single tier, peer to peer model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Two tier, server client model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Cryptographic public, private key security&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Read and write privileges&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Randomized data replication&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Nomadic Data concept&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;FARSITE&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	Cascading certificates system through directory hierarchy &lt;br /&gt;
•	 Keys &lt;br /&gt;
•	Three types of certificates &lt;br /&gt;
•	CFS requires an authorized certificate&lt;br /&gt;
•       Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, we trust the group not to make &lt;br /&gt;
        an incorrect update to directory metadata. This metadata includes an access control list (ACL) of public keys of all users&lt;br /&gt;
        who are authorized writers to that directory and to files in it&lt;br /&gt;
•       Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039; &lt;br /&gt;
•	Client Monitor, directory group, file host&lt;br /&gt;
•	When a directory group runs out of space, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;OCEANSTORE&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	GUID and ACLs used for write, encryption used for reads.&lt;br /&gt;
•       To prevent unauthorized reads, it encrypts&lt;br /&gt;
        all data in the system that is not completely public and distributes the encryption key to those users with read permission&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19856</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19856"/>
		<updated>2015-02-10T03:17:59Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention which programming language was used, but it was Windows-based. They did not implement the file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a University or large corporations, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;Farsite&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;OceanStore&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Implemented Content Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Update Model handled data consistency, no Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Single tier, peer to peer model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Two tier, server client model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Cryptographic public, private key security&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Read and write privileges&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Randomized data replication&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Nomadic Data concept&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FARSITE&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	Cascading certificates system through directory hierarchy &lt;br /&gt;
•	 Keys &lt;br /&gt;
•	Three types of certificates &lt;br /&gt;
•	CFS requires an authorized certificate&lt;br /&gt;
•       Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, we trust the group not to make &lt;br /&gt;
        an incorrect update to directory metadata. This metadata includes an access control list (ACL) of public keys of all users&lt;br /&gt;
        who are authorized writers to that directory and to files in it&lt;br /&gt;
•       Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039; &lt;br /&gt;
•	Client Monitor, directory group, file host&lt;br /&gt;
•	When a directory group runs out of space, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OCEANSTORE&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	GUID and ACLs used for write, encryption used for reads.&lt;br /&gt;
•       To prevent unauthorized reads, it encrypts&lt;br /&gt;
        all data in the system that is not completely public and distributes the encryption key to those users with read permission&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19855</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19855"/>
		<updated>2015-02-10T03:17:18Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention which programming language was used, but it was Windows-based. They did not implement the file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a University or large corporations, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
Team Members: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;Farsite&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;&#039;&#039;&#039;OceanStore&#039;&#039;&#039;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Implemented Content Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Update Model handled data consistency, no Leases&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Single tier, peer to peer model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Two tier, server client model&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Cryptographic public, private key security&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Read and write privileges&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;tr&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Randomized data replication&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;td&amp;gt;&amp;lt;p&amp;gt;Nomadic Data concept&amp;lt;/p&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FARSITE&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	Cascading certificates system through directory hierarchy &lt;br /&gt;
•	 Keys &lt;br /&gt;
•	Three types of certificates &lt;br /&gt;
•	CFS requires an authorized certificate&lt;br /&gt;
•       Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, we trust the group not to make &lt;br /&gt;
        an incorrect update to directory metadata. This metadata includes an access control list (ACL) of public keys of all users&lt;br /&gt;
        who are authorized writers to that directory and to files in it&lt;br /&gt;
•       Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039; &lt;br /&gt;
•	Client Monitor, directory group, file host&lt;br /&gt;
•	When a directory group runs out of space, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OCEANSTORE&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039; &lt;br /&gt;
•	GUID and ACLs used for write, encryption used for reads.&lt;br /&gt;
•       To prevent unauthorized reads, it encrypts&lt;br /&gt;
        all data in the system that is not completely public and distributes the encryption key to those users with read permission&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19845</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=19845"/>
		<updated>2015-02-09T20:52:44Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
Team Members: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Group 3==&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19765</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19765"/>
		<updated>2015-02-03T04:55:08Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== &#039;&#039;&#039;Cloud Distributed Operating System&#039;&#039;&#039;==&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network; it unifies the separate computers into what appears to be a single system.&lt;br /&gt;
The OS is based on two patterns:&lt;br /&gt;
1. Message-based OS&lt;br /&gt;
2. Object-based OS&lt;br /&gt;
&lt;br /&gt;
Its structure is based on the &#039;&#039;&#039;object-thread model&#039;&#039;&#039;.&lt;br /&gt;
It has a set of objects, each defined by a class. Objects respond to messages.&lt;br /&gt;
Sending a message to an object causes the object to execute the corresponding method and then reply.&lt;br /&gt;
&lt;br /&gt;
It has two kinds of objects, active and passive:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Active objects&#039;&#039;&#039; have one or more processes associated with them and can communicate with the external environment.&lt;br /&gt;
2. &#039;&#039;&#039;Passive objects&#039;&#039;&#039; have no processes associated with them.&lt;br /&gt;
&lt;br /&gt;
The contents of the Cloud are long-lived: they persist indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
Another important part of the Cloud DOS is the &#039;&#039;&#039;thread&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
A thread is a logical path of execution that traverses objects and executes code in them.&lt;br /&gt;
&lt;br /&gt;
Note: The cloud thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently.&lt;br /&gt;
&lt;br /&gt;
The nature of the Cloud object prohibits a thread from accessing any data outside the current address space in which it is executing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Interaction between &#039;&#039;&#039;Objects&#039;&#039;&#039; and &#039;&#039;&#039;Threads&#039;&#039;&#039;:&lt;br /&gt;
1) Inter-object interfaces are procedural.&lt;br /&gt;
2) Invocations work across machine boundaries.&lt;br /&gt;
3) Objects in the Cloud unify persistent storage and memory into a single address space, which simplifies programming.&lt;br /&gt;
4) Control flow is achieved by threads invoking objects.&lt;br /&gt;
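The object-thread interaction above can be sketched as threads invoking methods on a passive object that owns its state. This is a minimal single-machine model with illustrative names, not the Clouds system's actual API; real Clouds invocations also cross machine boundaries.

```python
import threading

# Toy sketch of the Clouds object-thread model: a passive object holds
# state, and threads "traverse" it by invoking its methods. Several
# threads may enter the object concurrently, so the object guards its
# own data with a lock. Illustrative only -- not the Clouds API.

class CounterObject:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0                # state owned by the object

    def invoke(self, n):
        """An 'invocation': the calling thread executes code inside
        the object and receives a reply (the new value)."""
        with self._lock:
            self.value += n
            return self.value

obj = CounterObject()
threads = [threading.Thread(target=obj.invoke, args=(1,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(obj.value)   # 10: every invocation applied despite concurrency
```

The lock models the fact that several threads may execute inside one object simultaneously and must still leave its state consistent.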
&lt;br /&gt;
&#039;&#039;&#039;Cloud Environment&#039;&#039;&#039;&lt;br /&gt;
1) Integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
2) There are three logical categories of machines: compute server, user workstation, and data server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Plan 9&#039;&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multiuser and mobile computing environment physically distributed across machines. &lt;br /&gt;
Work on Plan 9 began in the late 1980s. The aims of the system were:&lt;br /&gt;
1) To build a system that could be centrally administered&lt;br /&gt;
2) To be cost-effective, using cheap modern microcomputers.&lt;br /&gt;
The distribution itself is transparent to most programs.&lt;br /&gt;
This transparency is made possible by two properties:&lt;br /&gt;
1) A per-process-group name space&lt;br /&gt;
2) Uniform access to all resources by representing them as files.&lt;br /&gt;
&lt;br /&gt;
It is quite similar to Unix, yet very different. The commands, libraries, and system calls are similar to those of Unix, so a casual user might not distinguish between the two. The problems in Unix ran too deep to fix, but many of its ideas were carried along: the problems Unix addressed badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is divided along lines of service function.&lt;br /&gt;
* CPU servers and terminals use the same kernel&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers&lt;br /&gt;
* This gives users the choice between a distributed and a centralized style of working.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on three principles:&lt;br /&gt;
1) Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
2) There is a standard protocol, 9P, for accessing these resources.&lt;br /&gt;
3) Disjoint hierarchies provided by different services are joined into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
Another concept in Plan 9 is the &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;&lt;br /&gt;
In a &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;, a user boots a terminal or connects to a CPU server and then a new process group is created. &lt;br /&gt;
Processes in the group can add to or rearrange their name space using two system calls: [[Mount]] and [[Bind]].&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; is used to attach a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; is used to attach a kernel-resident file system to the name space and also to rearrange pieces of the name space.&lt;br /&gt;
&lt;br /&gt;
Plan 9 provides a mechanism to customize one&#039;s view of the system in software rather than in hardware.&lt;br /&gt;
It is built around traditional resources, but the model can be extended to other kinds of resources.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Parallel Programming&#039;&#039;&#039;&lt;br /&gt;
Parallel programming in Plan 9 has two aspects:&lt;br /&gt;
* The kernel provides a simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* The programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Implementation of Name Spaces&#039;&#039;&#039;&lt;br /&gt;
User processes construct name spaces using three system calls: mount, bind, and unmount.&lt;br /&gt;
Mount: attaches a tree served by a file server to the current name space&lt;br /&gt;
Bind: duplicates a piece of the existing name space at another point&lt;br /&gt;
Unmount: allows components to be removed.&lt;br /&gt;
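The mount/bind/unmount calls above can be sketched as operations on a per-process table mapping mount points to backing trees. This is a hypothetical toy model for illustration, not Plan 9's actual kernel structures or the 9P protocol.

```python
# Toy model of a per-process name space in the spirit of Plan 9's
# mount/bind/unmount. Mount points map to the service tree backing
# them; lookup uses longest-prefix match. Illustrative sketch only.

class NameSpace:
    def __init__(self):
        self.table = {}              # mount point -> backing tree

    def mount(self, server_tree, point):
        """Attach a tree served by a file server at a mount point."""
        self.table[point] = server_tree

    def bind(self, existing, point):
        """Duplicate an existing piece of the name space elsewhere."""
        self.table[point] = self.table[existing]

    def unmount(self, point):
        """Remove a component from the name space."""
        del self.table[point]

    def resolve(self, path):
        """Find which backing tree serves a path (longest prefix wins)."""
        best = max((p for p in self.table if path.startswith(p)),
                   key=len, default=None)
        return self.table.get(best)

ns = NameSpace()
ns.mount("fileserver:/", "/")
ns.mount("cpu:/proc", "/proc")
ns.bind("/proc", "/n/proc")          # same tree visible at two points
print(ns.resolve("/n/proc/1"))       # cpu:/proc
```

Because the table is per-process-group, two processes can resolve the same path to different trees, which is the customization mechanism the notes describe.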
----&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Google File System&#039;&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
It is a scalable file system for large, distributed, data-intensive applications. Its design was driven by observations of application workloads and the technological environment, both current and anticipated.&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google File System consists of a single master and multiple chunkservers, accessed by multiple clients. Files are divided into fixed-size chunks. Each chunk is identified by a globally unique 64-bit chunk handle assigned by the master at the time of chunk creation. The master maintains all file system metadata, including the name space and the access control information.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19764</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19764"/>
		<updated>2015-02-03T04:43:08Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== &#039;&#039;&#039;Cloud Distributed Operating System&#039;&#039;&#039;==&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
1. Message Based OS&lt;br /&gt;
2. Object Based  OS&lt;br /&gt;
&lt;br /&gt;
Its structure is based on the &#039;&#039;&#039;Object Thread Model&#039;&#039;&#039;. &lt;br /&gt;
The system consists of a set of objects, each defined by a class. Objects respond to messages. &lt;br /&gt;
Sending a message to an object causes the object to execute the corresponding method and return a reply. &lt;br /&gt;
&lt;br /&gt;
It has two kinds of objects: Active Objects and Passive Objects.&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Active Objects&#039;&#039;&#039; have one or more processes associated with them and can communicate with the external environment. &lt;br /&gt;
2. &#039;&#039;&#039;Passive Objects&#039;&#039;&#039; have no processes in them.&lt;br /&gt;
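A toy rendering of the active/passive distinction (hypothetical names; in Cloud the processes and message delivery are managed by the kernel, not by user-level threads):

```python
# Toy model of Cloud's active vs. passive objects.
# Hypothetical names; in Cloud, the processes and message delivery
# are kernel-managed, not Python threads.
import queue
import threading

class PassiveObject:
    """Holds data and methods, but has no process of its own."""
    def __init__(self, value):
        self.value = value

    def double(self):
        return self.value * 2

class ActiveObject:
    """Has an associated process that consumes and answers messages."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.replies = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            passive, method = self.inbox.get()
            # Execute a method in response to a message, then reply.
            self.replies.put(getattr(passive, method)())

p = PassiveObject(21)
a = ActiveObject()
a.inbox.put((p, "double"))          # send a message to the active object
assert a.replies.get(timeout=5) == 42
```

The passive object only ever runs code when some process (here, the active object's server thread) enters it, which mirrors the description above.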
&lt;br /&gt;
The contents of the Cloud are long-lived: they persist indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
Another important part of the Cloud DOS is the &#039;&#039;&#039;thread&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
A thread is a logical path of execution that traverses objects and executes code in them. &lt;br /&gt;
&lt;br /&gt;
Note: a Cloud thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently.&lt;br /&gt;
&lt;br /&gt;
The nature of a Cloud object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Interaction between &#039;&#039;&#039;Objects&#039;&#039;&#039; and &#039;&#039;&#039;Threads&#039;&#039;&#039;:&lt;br /&gt;
1) Inter-object interfaces are procedural.&lt;br /&gt;
2) Invocations work across machine boundaries.&lt;br /&gt;
3) Objects in Cloud unify the concepts of persistent storage and memory into a single address space, which makes programming simpler.&lt;br /&gt;
4) Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cloud Environment&#039;&#039;&#039;&lt;br /&gt;
1) It integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
2) There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Plan 9&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multiuser and mobile computing environment physically distributed across machines. &lt;br /&gt;
Work on Plan 9 began in the late 1980s. The aims of the system were:&lt;br /&gt;
1) To build a system that could be centrally administered &lt;br /&gt;
2) To be cost-effective, using cheap modern microcomputers. &lt;br /&gt;
The distribution itself is transparent to most programs.&lt;br /&gt;
This transparency is made possible by two properties:&lt;br /&gt;
1) A per-process-group name space&lt;br /&gt;
2) Uniform access to all resources, achieved by representing them as files.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is similar to Unix, yet quite different. Its commands, libraries and system calls are close enough to Unix&#039;s that a casual user might not distinguish the two. The problems with UNIX were too deep to fix in place, but many of its ideas were carried along: the problems UNIX addressed badly were improved upon, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is divided along the lines of service function. &lt;br /&gt;
* CPU servers and terminals use the same kernel&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers&lt;br /&gt;
* Users can choose whether their environment is distributed or centralized.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 rests on three principles:&lt;br /&gt;
1) Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
2) A standard protocol, 9P, is used to access these resources.&lt;br /&gt;
3) Disjoint hierarchies provided by different services are joined into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
Another concept in Plan 9 is the &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;.&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. &lt;br /&gt;
Processes in the group can add to or rearrange their name space using two system calls, [[Mount]] and [[Bind]]:&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; attaches a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; attaches a kernel-resident file system to the name space and rearranges pieces of the name space.&lt;br /&gt;
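Bind can also stack directories into a union that is searched in order, which is how Plan 9 users make private tools shadow system ones; a toy sketch (hypothetical names; the real bind takes flags such as MBEFORE/MAFTER and the kernel resolves names through the union):

```python
# Toy sketch of Plan 9-style union directories built with bind.
# Hypothetical names; the real bind system call takes flags like
# MBEFORE/MAFTER, and the kernel searches the union on name lookup.

def bind_before(union, directory):
    """Place 'directory' at the front of the union's search order."""
    union.insert(0, directory)

def lookup(union, name):
    """Resolve 'name' by searching the union's directories in order."""
    for directory in union:
        if name in directory:
            return directory[name]
    raise FileNotFoundError(name)

system_bin = {"ls": "/bin/ls (system)"}
my_bin = {"ls": "$home/bin/ls (private)"}

bin_union = [system_bin]
bind_before(bin_union, my_bin)      # private tools shadow system ones
assert lookup(bin_union, "ls") == "$home/bin/ls (private)"
```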
&lt;br /&gt;
Plan 9 provides a mechanism to customize one&#039;s view of the system in software rather than in hardware.&lt;br /&gt;
It was built around traditional file systems, but the approach can be extended to other resources. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Parallel Programming&#039;&#039;&#039;&lt;br /&gt;
Parallel programming in Plan 9 has two aspects:&lt;br /&gt;
* The kernel provides a simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* The programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Implementation of Name Spaces&#039;&#039;&#039;&lt;br /&gt;
User processes construct name spaces using three system calls: mount, bind and unmount.&lt;br /&gt;
&lt;br /&gt;
----&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19763</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19763"/>
		<updated>2015-02-03T03:34:26Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Cloud Distributed Operating System&#039;&#039;&#039;&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
1. Message Based OS&lt;br /&gt;
2. Object Based  OS&lt;br /&gt;
&lt;br /&gt;
Its structure is based on the &#039;&#039;&#039;Object Thread Model&#039;&#039;&#039;. &lt;br /&gt;
The system consists of a set of objects, each defined by a class. Objects respond to messages. &lt;br /&gt;
Sending a message to an object causes the object to execute the corresponding method and return a reply. &lt;br /&gt;
&lt;br /&gt;
It has two kinds of objects: Active Objects and Passive Objects.&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Active Objects&#039;&#039;&#039; have one or more processes associated with them and can communicate with the external environment. &lt;br /&gt;
2. &#039;&#039;&#039;Passive Objects&#039;&#039;&#039; have no processes in them.&lt;br /&gt;
&lt;br /&gt;
The contents of the Cloud are long-lived: they persist indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
Another important part of the Cloud DOS is the &#039;&#039;&#039;thread&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
A thread is a logical path of execution that traverses objects and executes code in them. &lt;br /&gt;
&lt;br /&gt;
Note: a Cloud thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently.&lt;br /&gt;
&lt;br /&gt;
The nature of a Cloud object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Interaction between &#039;&#039;&#039;Objects&#039;&#039;&#039; and &#039;&#039;&#039;Threads&#039;&#039;&#039;:&lt;br /&gt;
1) Inter-object interfaces are procedural.&lt;br /&gt;
2) Invocations work across machine boundaries.&lt;br /&gt;
3) Objects in Cloud unify the concepts of persistent storage and memory into a single address space, which makes programming simpler.&lt;br /&gt;
4) Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cloud Environment&#039;&#039;&#039;&lt;br /&gt;
1) It integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
2) There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Plan 9&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multiuser and mobile computing environment physically distributed across machines. &lt;br /&gt;
Work on Plan 9 began in the late 1980s. The aims of the system were:&lt;br /&gt;
1) To build a system that could be centrally administered &lt;br /&gt;
2) To be cost-effective, using cheap modern microcomputers. &lt;br /&gt;
The distribution itself is transparent to most programs.&lt;br /&gt;
This transparency is made possible by two properties:&lt;br /&gt;
1) A per-process-group name space&lt;br /&gt;
2) Uniform access to all resources, achieved by representing them as files.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is similar to Unix, yet quite different. Its commands, libraries and system calls are close enough to Unix&#039;s that a casual user might not distinguish the two. The problems with UNIX were too deep to fix in place, but many of its ideas were carried along.&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is divided along the lines of service function. &lt;br /&gt;
* CPU servers and terminals use the same kernel&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers&lt;br /&gt;
* Users can choose whether their environment is distributed or centralized.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 rests on three principles:&lt;br /&gt;
1) Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
2) A standard protocol, 9P, is used to access these resources.&lt;br /&gt;
3) Disjoint hierarchies provided by different services are joined into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
Another concept in Plan 9 is the &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;.&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. &lt;br /&gt;
Processes in the group can add to or rearrange their name space using two system calls, [[Mount]] and [[Bind]]:&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; attaches a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; attaches a kernel-resident file system to the name space and rearranges pieces of the name space.&lt;br /&gt;
----&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19762</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19762"/>
		<updated>2015-02-03T03:24:28Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Cloud Distributed Operating System]]&#039;&#039;&#039;&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
1. Message Based OS&lt;br /&gt;
2. Object Based  OS&lt;br /&gt;
&lt;br /&gt;
Its structure is based on the &#039;&#039;&#039;Object Thread Model&#039;&#039;&#039;. &lt;br /&gt;
The system consists of a set of objects, each defined by a class. Objects respond to messages. &lt;br /&gt;
Sending a message to an object causes the object to execute the corresponding method and return a reply. &lt;br /&gt;
&lt;br /&gt;
It has two kinds of objects: [[Active]] Objects and [[Passive]] Objects.&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Active Objects&#039;&#039;&#039; have one or more processes associated with them and can communicate with the external environment. &lt;br /&gt;
2. &#039;&#039;&#039;Passive Objects&#039;&#039;&#039; have no processes in them.&lt;br /&gt;
&lt;br /&gt;
The contents of the Cloud are long-lived: they persist indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
Another important part of the Cloud DOS is the &#039;&#039;&#039;thread&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Threads]] are logical paths of execution that traverse objects and execute code in them. &lt;br /&gt;
&lt;br /&gt;
Note: a Cloud thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently.&lt;br /&gt;
&lt;br /&gt;
The nature of a Cloud object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Interaction between &#039;&#039;&#039;Objects&#039;&#039;&#039; and &#039;&#039;&#039;Threads&#039;&#039;&#039;:&lt;br /&gt;
1) Inter-object interfaces are procedural.&lt;br /&gt;
2) Invocations work across machine boundaries.&lt;br /&gt;
3) Objects in Cloud unify the concepts of persistent storage and memory into a single address space, which makes programming simpler.&lt;br /&gt;
4) Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cloud Environment&#039;&#039;&#039;&lt;br /&gt;
1) It integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
2) There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Plan 9&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multiuser and mobile computing environment physically distributed across machines. &lt;br /&gt;
Work on Plan 9 began in the late 1980s. The aims of the system were:&lt;br /&gt;
1) To build a system that could be centrally administered &lt;br /&gt;
2) To be cost-effective, using cheap modern microcomputers. &lt;br /&gt;
The distribution itself is transparent to most programs.&lt;br /&gt;
This transparency is made possible by two properties:&lt;br /&gt;
1) A per-process-group name space&lt;br /&gt;
2) Uniform access to all resources, achieved by representing them as files.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is similar to Unix, yet quite different. Its commands, libraries and system calls are close enough to Unix&#039;s that a casual user might not distinguish the two. The problems with UNIX were too deep to fix in place, but many of its ideas were carried along.&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is divided along the lines of service function. &lt;br /&gt;
* CPU servers and terminals use the same kernel&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers&lt;br /&gt;
* Users can choose whether their environment is distributed or centralized.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 rests on three principles:&lt;br /&gt;
1) Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
2) A standard protocol, 9P, is used to access these resources.&lt;br /&gt;
3) Disjoint hierarchies provided by different services are joined into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
Another concept in Plan 9 is the &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;.&lt;br /&gt;
In a [[Virtual Name Space]], when a user boots a terminal or connects to a CPU server, a new process group is created. &lt;br /&gt;
Processes in the group can add to or rearrange their name space using two system calls, [[Mount]] and [[Bind]]:&lt;br /&gt;
* [[Mount]] attaches a new file system to a point in the name space.&lt;br /&gt;
* [[Bind]] attaches a kernel-resident file system to the name space and rearranges pieces of the name space.&lt;br /&gt;
----&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19761</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19761"/>
		<updated>2015-02-03T03:18:03Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Cloud Distributed Operating System]]&#039;&#039;&#039;&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
1. Message Based OS&lt;br /&gt;
2. Object Based  OS&lt;br /&gt;
&lt;br /&gt;
Its structure is based on the &#039;&#039;&#039;Object Thread Model&#039;&#039;&#039;. &lt;br /&gt;
The system consists of a set of objects, each defined by a class. Objects respond to messages. &lt;br /&gt;
Sending a message to an object causes the object to execute the corresponding method and return a reply. &lt;br /&gt;
&lt;br /&gt;
It has two kinds of objects: [[Active]] Objects and [[Passive]] Objects.&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Active Objects&#039;&#039;&#039; have one or more processes associated with them and can communicate with the external environment. &lt;br /&gt;
2. &#039;&#039;&#039;Passive Objects&#039;&#039;&#039; have no processes in them.&lt;br /&gt;
&lt;br /&gt;
The contents of the Cloud are long-lived: they persist indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
Another important part of the Cloud DOS is the &#039;&#039;&#039;thread&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Threads]] are logical paths of execution that traverse objects and execute code in them. &lt;br /&gt;
&lt;br /&gt;
Note: a Cloud thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently.&lt;br /&gt;
&lt;br /&gt;
The nature of a Cloud object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Interaction between &#039;&#039;&#039;Objects&#039;&#039;&#039; and &#039;&#039;&#039;Threads&#039;&#039;&#039;:&lt;br /&gt;
1) Inter-object interfaces are procedural.&lt;br /&gt;
2) Invocations work across machine boundaries.&lt;br /&gt;
3) Objects in Cloud unify the concepts of persistent storage and memory into a single address space, which makes programming simpler.&lt;br /&gt;
4) Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cloud Environment&#039;&#039;&#039;&lt;br /&gt;
1) It integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
2) There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Plan 9&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multiuser and mobile computing environment physically distributed across machines. &lt;br /&gt;
Work on Plan 9 began in the late 1980s. &lt;br /&gt;
The distribution itself is transparent to most programs.&lt;br /&gt;
This transparency is made possible by two properties:&lt;br /&gt;
1) A per-process-group name space&lt;br /&gt;
2) Uniform access to all resources, achieved by representing them as files.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is similar to Unix, yet quite different. Its commands, libraries and system calls are close enough to Unix&#039;s that a casual user might not distinguish the two. &lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is divided along the lines of service function. &lt;br /&gt;
* CPU servers and terminals use the same kernel&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers&lt;br /&gt;
* Users can choose whether their environment is distributed or centralized.&lt;br /&gt;
&lt;br /&gt;
Another concept in Plan 9 is the &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;.&lt;br /&gt;
In a [[Virtual Name Space]], when a user boots a terminal or connects to a CPU server, a new process group is created. &lt;br /&gt;
Processes in the group can add to or rearrange their name space using two system calls, [[Mount]] and [[Bind]]:&lt;br /&gt;
* [[Mount]] attaches a new file system to a point in the name space.&lt;br /&gt;
* [[Bind]] attaches a kernel-resident file system to the name space and rearranges pieces of the name space.&lt;br /&gt;
----&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19760</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19760"/>
		<updated>2015-02-03T02:45:06Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Cloud Distributed Operating System&amp;#039;&amp;#039;&amp;#039;  It is a distributed OS running on a set of computers that are interconnected by a group of network. It basically unifies differen...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Cloud Distributed Operating System]]&#039;&#039;&#039;&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
1. Message Based OS&lt;br /&gt;
2. Object Based  OS&lt;br /&gt;
&lt;br /&gt;
Its structure is based on the &#039;&#039;&#039;Object Thread Model&#039;&#039;&#039;. &lt;br /&gt;
The system consists of a set of objects, each defined by a class. Objects respond to messages. &lt;br /&gt;
Sending a message to an object causes the object to execute the corresponding method and return a reply. &lt;br /&gt;
&lt;br /&gt;
It has two kinds of objects: [[Active]] Objects and [[Passive]] Objects.&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Active Objects&#039;&#039;&#039; have one or more processes associated with them and can communicate with the external environment. &lt;br /&gt;
2. &#039;&#039;&#039;Passive Objects&#039;&#039;&#039; have no processes in them.&lt;br /&gt;
&lt;br /&gt;
The contents of the Cloud are long-lived: they persist indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
Another important part of the Cloud DOS is the &#039;&#039;&#039;thread&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Threads]] are logical paths of execution that traverse objects and execute code in them. &lt;br /&gt;
&lt;br /&gt;
Note: a Cloud thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently.&lt;br /&gt;
&lt;br /&gt;
The nature of a Cloud object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Interaction between &#039;&#039;&#039;Objects&#039;&#039;&#039; and &#039;&#039;&#039;Threads&#039;&#039;&#039;:&lt;br /&gt;
1) Inter-object interfaces are procedural.&lt;br /&gt;
2) Invocations work across machine boundaries.&lt;br /&gt;
3) Objects in Cloud unify the concepts of persistent storage and memory into a single address space, which makes programming simpler.&lt;br /&gt;
4) Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cloud Environment&#039;&#039;&#039;&lt;br /&gt;
1) It integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
2) There are three logical categories of machines: compute servers, user workstations and data servers.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19684</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19684"/>
		<updated>2015-01-20T03:48:58Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Reading Response Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Multics&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Team: Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs and MIT, but Bell backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. &lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
1. Utility Computing&lt;br /&gt;
2. Access Control Lists&lt;br /&gt;
3. Single level storage&lt;br /&gt;
4. Dynamic linking&lt;br /&gt;
5. Hot swapping&lt;br /&gt;
&lt;br /&gt;
It is not a distributed OS but a centralized system, which was written in assembly language.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19683</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19683"/>
		<updated>2015-01-20T03:48:27Z</updated>

		<summary type="html">&lt;p&gt;Shivjot: Created page with &amp;quot;Reading Response Discussion  ---- &amp;#039;&amp;#039;&amp;#039;Multics&amp;#039;&amp;#039;&amp;#039;  ----  Team: Sameer, Shivjot, Ambalica, Veena  It came into being in the 1960s and it completely vanished in 2000s. It was star...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Reading Response Discussion&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Multics&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Team: Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs and MIT, but Bell backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. &lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
1. Utility Computing&lt;br /&gt;
2. Access Control Lists&lt;br /&gt;
3. Single level storage&lt;br /&gt;
4. Dynamic linking&lt;br /&gt;
5. Hot swapping&lt;br /&gt;
&lt;br /&gt;
It is not a distributed OS but a centralized system, which was written in assembly language.&lt;/div&gt;</summary>
		<author><name>Shivjot</name></author>
	</entry>
</feed>