<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ksherif</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ksherif"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Ksherif"/>
	<updated>2026-05-12T18:07:17Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20063</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20063"/>
		<updated>2015-03-30T23:23:55Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: Created page with &amp;quot;=Haystack= =Comet= *Introduced the concept of distributed shared memory (DSM). In a DSM, RAMs from multiple servers would appear as if they are all belonging to one server, al...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Haystack=&lt;br /&gt;
=Comet=&lt;br /&gt;
*Introduced the concept of distributed shared memory (DSM). In a DSM, the RAM of multiple servers appears to belong to a single server, allowing better scalability for caching.&lt;br /&gt;
*The Comet model works by offloading a computation-intensive process from the mobile device to a single server.&lt;br /&gt;
*Offloading works by passing the computation-intensive process to the server while holding it on the mobile device. Once the process on the server completes, the results and the handle are returned to the mobile device. In other words, the process is not physically migrated to the server; instead it runs on the server while remaining stopped on the mobile device. &lt;br /&gt;
=F4= &lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block toward a global distributed system. The main critique is that the paper does not present a specific use case upon which the design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. No single distributed system model can be “one size fits all”; it will most probably break in some large-scale distributed application.&lt;br /&gt;
*A global distributed system that addresses all the distributed OS use cases will be the cumulative work of many large organizations, built block by block; the system will evolve as these different building blocks are put together. In other words, a global distributed system will come from a “bottom up, not top down” approach [Somayaji, 2015].&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20035</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20035"/>
		<updated>2015-03-23T23:17:13Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the buzzword for Dynamo. Dynamo = Availability&lt;br /&gt;
*Shifted the Computer Science paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;br /&gt;
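The idea behind consistent hashing can be sketched in a few lines of Python (a hypothetical toy, not Cassandra's actual implementation): nodes and keys hash onto the same ring, each key belongs to the first node clockwise from its hash, and adding a node only remaps the keys that fall between the new node and its predecessor.&lt;br /&gt;

```python
# Illustrative consistent-hashing ring (hypothetical sketch, not Cassandra's code).
import bisect
import hashlib

def ring_hash(s: str) -> int:
    # Hash a string to a point on the ring (MD5 used as a stable toy hash).
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes):
        # Place every node on the ring at the hash of its name.
        self.ring = sorted((ring_hash(n), n) for n in nodes)
        self.points = [p for p, _ in self.ring]

    def owner(self, key: str) -> str:
        # A key belongs to the first node clockwise from its hash position.
        i = bisect.bisect_right(self.points, ring_hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.owner("user:42"))  # deterministic: always the same one of the three nodes
```

The node names above are invented for illustration; the point is that when a node joins, only the keys between it and its ring predecessor change owner, which is why consistent hashing scales cluster membership changes gracefully.&lt;br /&gt;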
=Spanner=&lt;br /&gt;
*Provides data consistency and supports an SQL-like interface&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20034</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20034"/>
		<updated>2015-03-23T23:16:45Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the buzzword for Dynamo. Dynamo = Availability&lt;br /&gt;
*Shifted the Computer Science paradigm from prioritizing consistency to prioritizing availability.&lt;br /&gt;
==Cassandra==&lt;br /&gt;
*Partitions data across the cluster using consistent hashing.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20033</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20033"/>
		<updated>2015-03-23T23:13:43Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* BigTable */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;br /&gt;
&lt;br /&gt;
== Dynamo==&lt;br /&gt;
*Availability is the buzzword for Dynamo. Dynamo = Availability&lt;br /&gt;
*Shifted the Computer Science paradigm from prioritizing consistency to prioritizing availability.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20032</id>
		<title>DistOS 2015W Session 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_11&amp;diff=20032"/>
		<updated>2015-03-23T23:10:58Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: Created page with &amp;quot;==BigTable==&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==BigTable==&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20014</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20014"/>
		<updated>2015-03-16T22:22:39Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Many systems are used, with content replicated among them&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, and therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node IDs are used to locate values&lt;br /&gt;
** 160-bit key space; a binary tree partitions it, and lookups search down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest prefix, increasing the number of matched digits at each hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaf&lt;br /&gt;
*** All k nodes in a bucket are very unlikely to fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse a binary tree, so traversal takes at most O(log n) hops&lt;br /&gt;
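The XOR metric behind this traversal can be sketched as follows (an illustrative toy, not the Kademlia implementation): the distance between two IDs is their bitwise XOR, and each hop forwards the lookup to the known contact closest to the target, extending the shared prefix and roughly halving the remaining keyspace.&lt;br /&gt;

```python
# Illustrative XOR-metric lookup step (toy sketch, not the Kademlia implementation).

def xor_distance(a: int, b: int) -> int:
    # Kademlia measures distance between 160-bit IDs as their bitwise XOR.
    return a ^ b

def closest_node(target: int, known_nodes):
    # Each hop forwards the lookup to the known contact closest to the target,
    # which extends the shared prefix with the target by at least one bit.
    return min(known_nodes, key=lambda n: xor_distance(n, target))

print(closest_node(0b1000, [0b1010, 0b0110, 0b1100]))  # 10 (0b1010, distance 0b0010)
```

The 4-bit IDs above stand in for real 160-bit ones; since each step at least halves the XOR distance, a lookup over n nodes completes in O(log n) such hops.&lt;br /&gt;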
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scale to handle node arrival, departure, and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate that&lt;br /&gt;
implements the value/node mapping. This allows a client to find the node that stores&lt;br /&gt;
a specific data item. Since Comet uses a DHT implementation, routing occurs by applying&lt;br /&gt;
a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
Assumes clients are untrusted autonomous nodes. &lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers &lt;br /&gt;
e.g. an executing handler cannot corrupt the node or use unlimited resources. &lt;br /&gt;
Handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users downloading Comet must trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects or resources stored on the same node and has no direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Comet for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members: Ashley, Dany, Alexis, Khaled&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon node creation (join), an optimal routing table is created.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, you see which neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes there are no adversaries: while network failures may happen and nodes may go down, no node will deliberately try to interfere with the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access time to any given node. Best-effort publishing/unpublishing via decentralized object location and routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19980</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19980"/>
		<updated>2015-03-12T01:40:03Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* SETI@Home */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application can be identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public resource computing to analyze radio signals to find extraterrestrial intelligence&lt;br /&gt;
*Needs a good-quality telescope to search for radio signals, and lots of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public resource computing projects powered by resources donated by the public&lt;br /&gt;
*Uses BOINC as a backbone for the project&lt;br /&gt;
*Uses a relational database to store information on a large scale; further, it uses a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive, however, is that it is a cheap and easy way of scaling up the work massively.&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though mostly outside the legitimate world.&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google to do large scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
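The two phases above can be illustrated with the classic word-count example (a toy sketch in plain Python, not Google's actual MapReduce API):&lt;br /&gt;

```python
# Toy word-count in the MapReduce style (illustrative sketch, not Google's API).
from collections import defaultdict
from functools import reduce

def map_phase(docs):
    # Map (filtering): emit an intermediate (word, 1) pair for every word.
    for doc in docs:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce (summary): accumulate each group with a given function.
    return {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}

counts = reduce_phase(map_phase(["a b a", "b c"]))
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```

In the real system the map and reduce calls run in parallel across many machines, with the runtime handling partitioning, scheduling, and fault tolerance; the sequential sketch only shows the programming model.&lt;br /&gt;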
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities so that data results are almost instantaneous&lt;br /&gt;
*A distributed system for executing data parallel cyclic dataflow programs offering high throughput and low latency&lt;br /&gt;
*Aims to provide a general-purpose system that fulfills these requirements and also supports a wide variety of high-level programming models.&lt;br /&gt;
*Real Time Applications:&lt;br /&gt;
:*Batch iterative Machine Learning: &lt;br /&gt;
VW, an open-source distributed machine-learning system, performs each iteration in three phases: each process updates its local state; processes independently train on their local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming Acyclic Computation&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes tweets and reports counts of hashtag occurrences as well as links between popular tags, the same computation written using Naiad took 26 lines of code and ran close to 2x faster.&lt;br /&gt;
* The Naiad paper won the best paper award at SOSP 2013; see the Microsoft Research project page http://research.microsoft.com/en-us/projects/naiad/ . Further down that page are videos explaining Naiad, including Derek Murray&#039;s presentation at SOSP 2013.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19979</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19979"/>
		<updated>2015-03-12T01:39:36Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* BOINC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application can be identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public resource computing to analyze radio signals to find extraterrestrial intelligence&lt;br /&gt;
*Needs a good-quality telescope to search for radio signals, and lots of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public resource computing projects powered by resources donated by the public&lt;br /&gt;
*Uses BOINC as a backbone for the project&lt;br /&gt;
*Uses a relational database to store information on a large scale; further, it uses a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive, however, is that it is a cheap and easy way of scaling up the work massively.&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though mostly outside the legitimate world.&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google to do large scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities so that data results are almost instantaneous&lt;br /&gt;
*A distributed system for executing data parallel cyclic dataflow programs offering high throughput and low latency&lt;br /&gt;
*Aims to provide a general-purpose system that fulfills these requirements and also supports a wide variety of high-level programming models.&lt;br /&gt;
*Real Time Applications:&lt;br /&gt;
:*Batch iterative Machine Learning: &lt;br /&gt;
VW, an open-source distributed machine-learning system, performs each iteration in three phases: each process updates its local state; processes independently train on their local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming Acyclic Computation&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes tweets and reports counts of hashtag occurrences as well as links between popular tags, the same computation written using Naiad took 26 lines of code and ran close to 2x faster.&lt;br /&gt;
* The Naiad paper won the best paper award at SOSP 2013; see the Microsoft Research project page http://research.microsoft.com/en-us/projects/naiad/ . Further down that page are videos explaining Naiad, including Derek Murray&#039;s presentation at SOSP 2013.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19975</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19975"/>
		<updated>2015-03-11T03:10:27Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application can be identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public resource computing to analyze radio signals to find extraterrestrial intelligence&lt;br /&gt;
*Needs a good-quality telescope to search for radio signals, and lots of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public resource computing projects powered by resources donated by the public&lt;br /&gt;
*Uses BOINC as a backbone for the project&lt;br /&gt;
*Uses a relational database to store information on a large scale; further, it uses a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive, however, is that it is a cheap and easy way of scaling up the work massively.&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though mostly outside the legitimate world.&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google to do large scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities so that data results are almost instantaneous&lt;br /&gt;
*A distributed system for executing data parallel cyclic dataflow programs offering high throughput and low latency&lt;br /&gt;
*Aims to provide a general-purpose system that fulfills these requirements and also supports a wide variety of high-level programming models.&lt;br /&gt;
*Real Time Applications:&lt;br /&gt;
:*Batch iterative Machine Learning: &lt;br /&gt;
VW, an open-source distributed machine-learning system, performs each iteration in three phases: each process updates its local state; processes independently train on their local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming Acyclic Computation&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes tweets and reports counts of hashtag occurrences as well as links between popular tags, the same computation written using Naiad took 26 lines of code and ran close to 2x faster.&lt;br /&gt;
* The Naiad paper won the best paper award at SOSP 2013; see the Microsoft Research project page http://research.microsoft.com/en-us/projects/naiad/ . Further down that page are videos explaining Naiad, including Derek Murray&#039;s presentation at SOSP 2013.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19974</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19974"/>
		<updated>2015-03-11T02:40:31Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* SETI@Home */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application can be identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public resource computing to analyze radio signals to find extraterrestrial intelligence&lt;br /&gt;
*Needs a good-quality telescope to search for radio signals, and lots of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public resource computing projects powered by resources donated by the public&lt;br /&gt;
*Uses BOINC as a backbone for the project&lt;br /&gt;
*Uses a relational database to store information on a large scale; further, it uses a multi-threaded server to distribute work to clients&lt;br /&gt;
*The quality of data in this architecture is untrustworthy; the main incentive, however, is that it is a cheap and easy way of scaling up the work massively.&lt;br /&gt;
*Provided social incentives to encourage users to join the system.&lt;br /&gt;
*This computation model still exists, though mostly outside the legitimate world.&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google to do large scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities so that data results are almost instantaneous&lt;br /&gt;
*A distributed system for executing data parallel cyclic dataflow programs offering high throughput and low latency&lt;br /&gt;
*Aims to provide a general-purpose system that fulfills these requirements and also supports a wide variety of high-level programming models.&lt;br /&gt;
*Real Time Applications:&lt;br /&gt;
:*Batch iterative Machine Learning: &lt;br /&gt;
VW, an open-source distributed machine-learning system, performs each iteration in three phases: each process updates its local state; processes independently train on their local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming Acyclic Computation&lt;br /&gt;
Compared to [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes tweets and reports counts of hashtag occurrences as well as links between popular tags, the same computation written using Naiad took 26 lines of code and ran close to 2x faster.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19973</id>
		<title>DistOS 2015W Session 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_9&amp;diff=19973"/>
		<updated>2015-03-11T02:39:39Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* BOINC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== BOINC ==&lt;br /&gt;
&lt;br /&gt;
*Public Resource Computing Platform&lt;br /&gt;
*Gives scientists the ability to use large amounts of computation resources.&lt;br /&gt;
*The clients do not connect directly with each other; instead they talk to a central server located at Berkeley&lt;br /&gt;
*The goals of BOINC are: &lt;br /&gt;
:*1) Reduce the barriers to entry&lt;br /&gt;
:*2) Share resources among autonomous projects&lt;br /&gt;
:*3) Support diverse applications&lt;br /&gt;
:*4) Reward participants.&lt;br /&gt;
 A BOINC application can be identified by a single master URL, which serves as the homepage as well as the directory of the servers.&lt;br /&gt;
&lt;br /&gt;
== SETI@Home ==&lt;br /&gt;
&lt;br /&gt;
*Uses public resource computing to analyze radio signals to find extraterrestrial intelligence&lt;br /&gt;
*Needs a good-quality telescope to search for radio signals, and lots of computational power, which was unavailable locally&lt;br /&gt;
*It has not yet found extraterrestrial intelligence, but it has established the credibility of public resource computing projects powered by resources donated by the public&lt;br /&gt;
*Uses BOINC as a backbone for the project&lt;br /&gt;
*Uses a relational database to store information on a large scale; further, it uses a multi-threaded server to distribute work to clients&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
*A programming model presented by Google to do large scale parallel computations&lt;br /&gt;
*Uses the &amp;lt;code&amp;gt;Map()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Reduce()&amp;lt;/code&amp;gt; functions from functional style programming languages&lt;br /&gt;
:*Map (Filtering)&lt;br /&gt;
::*Takes a function and applies it to all elements of the given data set&lt;br /&gt;
:*Reduce (Summary)&lt;br /&gt;
::*Accumulates results from the data set using a given function&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
*A programming model similar to &amp;lt;code&amp;gt;MapReduce&amp;lt;/code&amp;gt; but with streaming capabilities so that data results are almost instantaneous&lt;br /&gt;
*A distributed system for executing data parallel cyclic dataflow programs offering high throughput and low latency&lt;br /&gt;
*Aims to provide a general-purpose system that fulfills these requirements and also supports a wide variety of high-level programming models.&lt;br /&gt;
*Real Time Applications:&lt;br /&gt;
:*Batch iterative Machine Learning: &lt;br /&gt;
VW, an open-source distributed machine-learning system, performs each iteration in three phases: each process updates its local state; processes independently train on their local data; and the processes jointly compute a global average (AllReduce).&lt;br /&gt;
:*Streaming Acyclic Computation&lt;br /&gt;
Compared to a system called [http://research.microsoft.com/apps/pubs/default.aspx?id=163832 Kineograph] (also from Microsoft), which processes Twitter posts and reports counts of hashtag occurrences as well as links between popular tags, the same computation written in Naiad took 26 lines of code and ran close to 2x faster.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_8&amp;diff=19931</id>
		<title>DistOS 2015W Session 8</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_8&amp;diff=19931"/>
		<updated>2015-03-03T01:36:53Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: Created page with &amp;quot;* The link to Vannevar Bush’s article, “As we may think” http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/   * How both the article and the vid...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* The link to Vannevar Bush’s article, “As we may think” http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ &lt;br /&gt;
&lt;br /&gt;
* How do the article and the video relate to the course? &lt;br /&gt;
The creation of the Web is essentially what drove the need to connect thousands of machines and to develop mechanisms for these machines to share files and data efficiently. In other words, the science of distributed operating systems evolved as a result of the creation of the Web and its exponential growth.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19887</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19887"/>
		<updated>2015-02-24T02:37:36Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* Chubby */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
* Key advantage is that it is a general purpose distributed file system.  &lt;br /&gt;
* System is composed of three units:&lt;br /&gt;
	*Client,&lt;br /&gt;
	*Cluster of Object Storage device (OSD),&lt;br /&gt;
	*MetaData Server (MDS).&lt;br /&gt;
*CRUSH (Controlled Replication Under Scalable Hashing) is the hashing algorithm used to calculate the locations of objects instead of looking them up. The CRUSH paper on Ceph’s website can be downloaded from http://ceph.com/papers/weil-crush-sc06.pdf.&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph.&lt;br /&gt;
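The core idea behind CRUSH (clients compute placement rather than consult a directory) can be sketched as follows; real CRUSH descends a weighted hierarchical cluster map, and the device names and replica count below are assumptions for illustration only:

```python
import hashlib

# Toy sketch: compute which devices hold an object by hashing its name,
# instead of asking a central lookup service. The OSD names and the
# replica count are illustrative assumptions, not real Ceph defaults.
OSDS = ["osd0", "osd1", "osd2", "osd3"]

def locate(obj_name, replicas=2):
    h = int(hashlib.sha256(obj_name.encode()).hexdigest(), 16)
    start = h % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(replicas)]

# Every client computes the same placement independently, with no lookup:
print(locate("photo.jpg"))
```

Because placement is a pure function of the object name and the cluster map, any client can find (or write) an object without contacting a metadata server on the data path.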
&lt;br /&gt;
= Chubby =&lt;br /&gt;
* Runs a consensus protocol among a set of servers to agree on which one is the master in charge of the metadata.&lt;br /&gt;
* Can be considered a distributed file system for small files only (up to 256 KB) with very low scalability (5 servers).&lt;br /&gt;
* Is defined in the paper as “A lock service used within a loosely-coupled distributed system consisting of moderately large number of small machines connected by a high speed network”.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19886</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19886"/>
		<updated>2015-02-24T02:37:16Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
* Key advantage is that it is a general purpose distributed file system.  &lt;br /&gt;
* System is composed of three units:&lt;br /&gt;
	*Client,&lt;br /&gt;
	*Cluster of Object Storage device (OSD),&lt;br /&gt;
	*MetaData Server (MDS).&lt;br /&gt;
*CRUSH (Controlled Replication Under Scalable Hashing) is the hashing algorithm used to calculate the locations of objects instead of looking them up. The CRUSH paper on Ceph’s website can be downloaded from http://ceph.com/papers/weil-crush-sc06.pdf.&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph.&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
* Runs a consensus protocol among a set of servers to agree on which one is the master in charge of the metadata.&lt;br /&gt;
* Can be considered a distributed file system for small files only (up to 256 KB) with very low scalability (5 servers).&lt;br /&gt;
* Is defined in the paper as “A lock service used within a loosely-coupled distributed system consisting of moderately large number of small machines connected by a high speed network”.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19885</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19885"/>
		<updated>2015-02-24T02:36:35Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
* Key advantage is that it is a general purpose distributed file system.  &lt;br /&gt;
* System is composed of three units:&lt;br /&gt;
	*Client.&lt;br /&gt;
	*Cluster of Object Storage device (OSD).&lt;br /&gt;
	*MetaData Server (MDS).&lt;br /&gt;
*CRUSH (Controlled Replication Under Scalable Hashing) is the hashing algorithm used to calculate the locations of objects instead of looking them up. The CRUSH paper on Ceph’s website can be downloaded from http://ceph.com/papers/weil-crush-sc06.pdf.&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph.&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
* Runs a consensus protocol among a set of servers to agree on which one is the master in charge of the metadata.&lt;br /&gt;
* Can be considered a distributed file system for small files only (up to 256 KB) with very low scalability (5 servers).&lt;br /&gt;
* Is defined in the paper as “A lock service used within a loosely-coupled distributed system consisting of moderately large number of small machines connected by a high speed network”.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19884</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19884"/>
		<updated>2015-02-24T02:33:50Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* Chubby */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
* Key advantage is that it is a general purpose distributed file system.  &lt;br /&gt;
* System is composed of three units:&lt;br /&gt;
	*Client.&lt;br /&gt;
	*Cluster of Object Storage device (OSD).&lt;br /&gt;
	*MetaData Server (MDS)&lt;br /&gt;
*CRUSH (Controlled Replication Under Scalable Hashing) is the hashing algorithm used to calculate the locations of objects instead of looking them up. The CRUSH paper on Ceph’s website can be downloaded from http://ceph.com/papers/weil-crush-sc06.pdf.&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph.&lt;br /&gt;
 &lt;br /&gt;
= Chubby =&lt;br /&gt;
* Runs a consensus protocol among a set of servers to agree on which one is the master in charge of the metadata.&lt;br /&gt;
* Can be considered a distributed file system for small files only (up to 256 KB) with very low scalability (5 servers).&lt;br /&gt;
* Is defined in the paper as “A lock service used within a loosely-coupled distributed system consisting of moderately large number of small machines connected by a high speed network”.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19883</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19883"/>
		<updated>2015-02-24T02:33:01Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
* Key advantage is that it is a general purpose distributed file system.  &lt;br /&gt;
* System is composed of three units:&lt;br /&gt;
	*Client.&lt;br /&gt;
	*Cluster of Object Storage device (OSD).&lt;br /&gt;
	*MetaData Server (MDS)&lt;br /&gt;
*CRUSH (Controlled Replication Under Scalable Hashing) is the hashing algorithm used to calculate the locations of objects instead of looking them up. The CRUSH paper on Ceph’s website can be downloaded from http://ceph.com/papers/weil-crush-sc06.pdf.&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph.&lt;br /&gt;
 &lt;br /&gt;
= Chubby =&lt;br /&gt;
* Runs a consensus protocol among a set of servers to agree on which one is the master in charge of the metadata.&lt;br /&gt;
* Can be considered a distributed file system for small files only (up to 256 KB) with very low scalability (5 servers).&lt;br /&gt;
* Is defined in the paper as “A lock service used within a loosely-coupled distributed system consisting of moderately large number of small machines connected by a high speed network”.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19882</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19882"/>
		<updated>2015-02-24T02:32:22Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: Created page with &amp;quot;= Ceph = * Key advantage is that it is a general purpose distributed file system.   * System is composed of three units: 	*Client. 	*Cluster of Object Storage device (OSD). 	*...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
* Key advantage is that it is a general purpose distributed file system.  &lt;br /&gt;
* System is composed of three units:&lt;br /&gt;
	*Client.&lt;br /&gt;
	*Cluster of Object Storage device (OSD).&lt;br /&gt;
	*MetaData Server (MDS)&lt;br /&gt;
*CRUSH (Controlled Replication Under Scalable Hashing) is the hashing algorithm used to calculate the locations of objects instead of looking them up. The CRUSH paper on Ceph’s website can be downloaded from http://ceph.com/papers/weil-crush-sc06.pdf.&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph.&lt;br /&gt;
= Chubby =&lt;br /&gt;
* Runs a consensus protocol among a set of servers to agree on which one is the master in charge of the metadata.&lt;br /&gt;
* Can be considered a distributed file system for small files only (up to 256 KB) with very low scalability (5 servers).&lt;br /&gt;
* Is defined in the paper as “A lock service used within a loosely-coupled distributed system consisting of moderately large number of small machines connected by a high speed network”.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19740</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19740"/>
		<updated>2015-01-30T03:21:23Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Amoeba Operating System =&lt;br /&gt;
&lt;br /&gt;
=== Capabilities ===&lt;br /&gt;
* A capability acts as a pointer to an object&lt;br /&gt;
* A capability is a kind of ticket or key that allows its holder to perform some (not necessarily all) operations on the object&lt;br /&gt;
* Capabilities can be passed across a wide-area network&lt;br /&gt;
* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each&lt;br /&gt;
* After the server has performed the operation, it sends back a reply message that unblocks the client&lt;br /&gt;
* Sending a message, blocking, and accepting the reply together form a remote procedure call, which can be encapsulated to make the entire remote operation look like a local procedure call&lt;br /&gt;
* The second field is used by the server to identify which of its objects is being addressed; the server port and object number together identify the object on which the operation is to be performed&lt;br /&gt;
* The server generates a 48-bit random number as the check field&lt;br /&gt;
* The third field is the rights field, which contains a bit map telling which operations the holder of the capability may perform&lt;br /&gt;
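The capability layout described above can be sketched as a small data structure; the field widths follow the notes, but the class, names, and helper below are hypothetical illustrations, not the actual Amoeba encoding:

```python
import os
from dataclasses import dataclass

# Hypothetical sketch of an Amoeba-style capability: server port and
# object number name the object, rights is a bit map of permitted
# operations, and check is a 48-bit random number that makes
# capabilities hard to forge. Names and layout are illustrative only.
RIGHT_READ, RIGHT_WRITE = 0b01, 0b10

@dataclass(frozen=True)
class Capability:
    server_port: int    # which server manages the object
    object_number: int  # which object within that server
    rights: int         # bit map of operations the holder may perform
    check: int          # 48-bit random number guarding against forgery

def new_capability(port, obj, rights):
    return Capability(port, obj, rights, int.from_bytes(os.urandom(6), "big"))

cap = new_capability(7, 42, RIGHT_READ)
assert cap.rights & RIGHT_READ          # holder may read
assert not (cap.rights & RIGHT_WRITE)   # but may not write
```

Because the check field is a large random number known only to the server and legitimate holders, a process cannot simply guess a valid capability for an object it was never granted.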
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
=== Thread Management ===&lt;br /&gt;
* The same process can have multiple threads, and each thread has its own program counter and stack&lt;br /&gt;
* Threads behave like processes&lt;br /&gt;
* Threads can be synchronized using mutexes and semaphores&lt;br /&gt;
* The Bullet file server uses multiple threads; a thread blocks when the mutex it needs is held by another thread&lt;br /&gt;
* The careful reader may have noticed that a user process can pull 813 kbytes/sec&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Design Decisions ===&lt;br /&gt;
* Designed for a cluster of workstations with high-speed network access (only really supports a LAN).&lt;br /&gt;
* Abstract the physical architecture of the participating workstations, by defining common protocols providing well-defined interfaces.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19736</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19736"/>
		<updated>2015-01-29T03:55:07Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Amebo Operating System: Capablities&#039;&#039;&#039; &lt;br /&gt;
	* Pointer to the object&lt;br /&gt;
	* Capability assigning right to perform to some operation to the object ticket &lt;br /&gt;
	* Communicate wide area network &lt;br /&gt;
	* a kind of ticket or key that allows the holder of the capa- bility to perform some (not neces- sarily all) &lt;br /&gt;
	* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that my ne performed on each &lt;br /&gt;
	* After the server has performed the operation, it sends back a reply message that unblocks the client &lt;br /&gt;
	* Sending messages, blocking and accepting forms the remote procedure call that can be encapsulate using to make entire remote operation look like local procedure &lt;br /&gt;
	* Second field:  used by the sever to identify which of its objects is being addressed server port and object number identify object which operation to performed  &lt;br /&gt;
	* Generates 48-bit random number     &lt;br /&gt;
	* The third field is the right field which contains a bit map telling which operation the holder of the capability  may performed &lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Thread Management:&#039;&#039;&#039;&lt;br /&gt;
	• The same process can have multiple threads, and each thread has its own program counter and stack&lt;br /&gt;
	• Threads behave like processes&lt;br /&gt;
	• Threads can be synchronized using mutexes and semaphores&lt;br /&gt;
	• The Bullet file server uses multiple threads; a thread blocks when the mutex it needs is held by another thread&lt;br /&gt;
	• The careful reader may have noticed that a user process can pull 813 kbytes/sec&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19735</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19735"/>
		<updated>2015-01-29T03:54:38Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Amebo Operating System: Capablities&#039;&#039;&#039; &lt;br /&gt;
	* Pointer to the object&lt;br /&gt;
	* Capability assigning right to perform to some operation to the object ticket &lt;br /&gt;
	* Communicate wide area network &lt;br /&gt;
	* a kind of ticket or key that allows the holder of the capa- bility to perform some (not neces- sarily all) &lt;br /&gt;
	* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that my ne performed on each &lt;br /&gt;
	* After the server has performed the operation, it sends back a reply message that unblocks the client &lt;br /&gt;
	* Sending messages, blocking and accepting forms the remote procedure call that can be encapsulate using to make entire remote operation look like local procedure &lt;br /&gt;
	* Second field:  used by the sever to identify which of its objects is being addressed server port and object number identify object which operation to performed  &lt;br /&gt;
	* Generates 48-bit random number     &lt;br /&gt;
	* The third field is the right field which contains a bit map telling which operation the holder of the capability  may performed &lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Thread Management:&#039;&#039;&#039;&lt;br /&gt;
	• The same process can have multiple threads, and each thread has its own program counter and stack&lt;br /&gt;
	• Threads behave like processes&lt;br /&gt;
	• Threads can be synchronized using mutexes and semaphores&lt;br /&gt;
	• The Bullet file server uses multiple threads; a thread blocks when the mutex it needs is held by another thread&lt;br /&gt;
	• The careful reader may have noticed that a user process can pull 813 kbytes/sec&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19734</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19734"/>
		<updated>2015-01-29T03:53:02Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Amebo Operating System: Capablities&#039;&#039;&#039; &lt;br /&gt;
	• Pointer to the object&lt;br /&gt;
	• Capability assigning right to perform to some operation to the object ticket &lt;br /&gt;
	• Communicate wide area network &lt;br /&gt;
	• a kind of ticket or key that allows the holder of the capa- bility to perform some (not neces- sarily all) &lt;br /&gt;
	• Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that my ne performed on each &lt;br /&gt;
	• After the server has performed the operation, it sends back a reply message that unblocks the client &lt;br /&gt;
	• Sending messages, blocking and accepting forms the remote procedure call that can be encapsulate using to make entire remote operation look like local procedure &lt;br /&gt;
	• Second field:  used by the sever to identify which of its objects is being addressed server port and object number identify object which operation to performed  &lt;br /&gt;
	• Generates 48-bit random number     &lt;br /&gt;
	• The third field is the right field which contains a bit map telling which operation the holder of the capability  may performed &lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Thread Management:&#039;&#039;&#039;&lt;br /&gt;
	• The same process can have multiple threads, and each thread has its own program counter and stack&lt;br /&gt;
	• Threads behave like processes&lt;br /&gt;
	• Threads can be synchronized using mutexes and semaphores&lt;br /&gt;
	• The Bullet file server uses multiple threads; a thread blocks when the mutex it needs is held by another thread&lt;br /&gt;
	• The careful reader may have noticed that a user process can pull 813 kbytes/sec&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19712</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19712"/>
		<updated>2015-01-26T03:31:09Z</updated>

		<summary type="html">&lt;p&gt;Ksherif: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Reading Response Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Multics&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Team: Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
Multics came into being in the 1960s and vanished completely in the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell Labs backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. &lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
1. Utility Computing&lt;br /&gt;
2. Access Control Lists&lt;br /&gt;
3. Single level storage&lt;br /&gt;
4. Dynamic linking&lt;br /&gt;
  *Shared libraries or files can be loaded and linked into random access memory at run time. &lt;br /&gt;
5. Hot swapping&lt;br /&gt;
6. Multiprocessing System&lt;br /&gt;
7. Ring oriented Security&lt;br /&gt;
   * It provides a number of levels of authorization within the computer system.&lt;br /&gt;
It is not a distributed OS but a centralized system; it was written mainly in PL/I.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sprite&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Panel Group: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following design features:&lt;br /&gt;
1. Network Transparency.&lt;br /&gt;
2. Process Migration.&lt;br /&gt;
3. Handling Cache Consistency&lt;br /&gt;
    a. Sequential file sharing ==&amp;gt; handled by using a version number for each file.&lt;br /&gt;
    b. Concurrent write sharing ==&amp;gt; handled by disabling client caching for the shared file.&lt;br /&gt;
4. The main design theme is to make aggressive use of RAM for caching files.&lt;/div&gt;</summary>
		<author><name>Ksherif</name></author>
	</entry>
</feed>