<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jasons</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jasons"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Jasons"/>
	<updated>2026-05-12T20:52:44Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20209</id>
		<title>DistOS 2015W Session 12</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_12&amp;diff=20209"/>
		<updated>2015-04-20T14:23:41Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://static.usenix.org/legacy/events/osdi10/tech/full_papers/Beaver.pdf Beaver et al., &amp;quot;Finding a needle in Haystack: Facebook’s photo storage&amp;quot; (OSDI 2010)]&lt;br /&gt;
* [https://www.usenix.org/conference/osdi12/technical-sessions/presentation/gordon Gordon et al., &amp;quot;COMET: Code Offload by Migrating Execution Transparently&amp;quot; (OSDI 2012)]&lt;br /&gt;
* [https://www.usenix.org/conference/osdi14/technical-sessions/presentation/muralidhar Muralidhar et al., &amp;quot;f4: Facebook&#039;s Warm BLOB Storage System&amp;quot; (OSDI 2014)]&lt;br /&gt;
* [https://www.usenix.org/conference/osdi14/technical-sessions/presentation/zhang Zhang et al., &amp;quot;Customizable and Extensible Deployment for Mobile/Cloud Applications&amp;quot; (OSDI 2014)]&lt;br /&gt;
&lt;br /&gt;
Session 12 had two separate themes running through its papers.&lt;br /&gt;
&lt;br /&gt;
The first was the storage of large volumes of data that would never be modified, rarely be deleted, and read with varying frequency distributions. This is a specific subproblem of the more general challenge of high-performance, scalable &amp;amp; reliable distributed storage, and it once more led the solvers at hand (Facebook) to design specialized systems that exploit the specifics of the subproblem for superior performance.&lt;br /&gt;
&lt;br /&gt;
The second was a return to code-offloading mechanisms that can make existing programs distributed, in a modern context. COMET takes existing smartphone applications and splits the user interaction and computation between the phone and an external server. Sapphire provides a whole collection of deployment modules that one can mix &amp;amp; match and apply to conformant programs to turn them into distributed applications.&lt;br /&gt;
&lt;br /&gt;
=Haystack=&lt;br /&gt;
* Facebook&#039;s Photo Application Storage System. &lt;br /&gt;
* Previous Facebook photo storage was based on an NFS design. NFS did not work well because it took three filesystem accesses per logical photo read; Haystack needs only one.&lt;br /&gt;
*Main goals of Haystack:&lt;br /&gt;
** High throughput with low latency: a single disk operation per photo read (a minimal sketch of this idea appears at the end of this section).&lt;br /&gt;
**Fault tolerance&lt;br /&gt;
**Cost effective&lt;br /&gt;
**Simple&lt;br /&gt;
*Facebook stores all images in Haystack, with a CDN in front to cache hot data. Haystack still needs to be fast, since accessing non-cached data is still common.&lt;br /&gt;
*Haystack reduces the memory used for &#039;&#039;filesystem metadata&#039;&#039; &lt;br /&gt;
*It has 2 types of metadata:&lt;br /&gt;
**&#039;&#039;Application metadata&#039;&#039;&lt;br /&gt;
**&#039;&#039;File System metadata&#039;&#039;&lt;br /&gt;
* The architecture consists of 3 components:&lt;br /&gt;
**Haystack Store&lt;br /&gt;
**Haystack Directory&lt;br /&gt;
**Haystack Cache&lt;br /&gt;
* Pitchfork (which periodically checks the health of Store machines) and bulk sync are used to tolerate faults; this fault tolerance is essential to making Haystack feasible and reliable.&lt;br /&gt;
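&lt;br /&gt;
A minimal sketch of the one-read idea above (the class and field names are invented for illustration, not Facebook&#039;s actual implementation): all photo metadata fits in an in-memory index, so a read needs a single seek into one large append-only store file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class NeedleIndex:&lt;br /&gt;
    def __init__(self, store_path):&lt;br /&gt;
        self.store_path = store_path&lt;br /&gt;
        self.index = {}   # maps photo_id to (offset, size), kept in RAM&lt;br /&gt;
&lt;br /&gt;
    def append(self, photo_id, data):&lt;br /&gt;
        # write the photo (needle) at the end of the store file and remember where it is&lt;br /&gt;
        with open(self.store_path, &amp;quot;ab&amp;quot;) as store:&lt;br /&gt;
            store.seek(0, 2)              # jump to the end of the file&lt;br /&gt;
            offset = store.tell()&lt;br /&gt;
            store.write(data)&lt;br /&gt;
        self.index[photo_id] = (offset, len(data))&lt;br /&gt;
&lt;br /&gt;
    def read(self, photo_id):&lt;br /&gt;
        # one dictionary lookup in RAM, then one disk read - no directory traversal&lt;br /&gt;
        offset, size = self.index[photo_id]&lt;br /&gt;
        with open(self.store_path, &amp;quot;rb&amp;quot;) as store:&lt;br /&gt;
            store.seek(offset)&lt;br /&gt;
            return store.read(size)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;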
&lt;br /&gt;
=Comet=&lt;br /&gt;
*Builds on the concept of distributed shared memory (DSM). In a DSM, the RAM of multiple machines appears as if it all belongs to one machine, allowing better scalability for caching.&lt;br /&gt;
*DSM provides advantages over RPC (Remote Procedure Call), including multi-threading support and thread migration during execution. &lt;br /&gt;
*The client and server maintain consistency using DSM.&lt;br /&gt;
*The COMET model works by offloading a computation-intensive thread from the mobile device to a single server.&lt;br /&gt;
*Offloading works by passing the computation-intensive thread to the server while holding it on the mobile device. Once the server-side execution completes, the results and control are handed back to the device. In other words, the thread is not physically moved to the server; it runs on the server while it is stopped on the mobile device.&lt;br /&gt;
*In Java Memory Model, memory reads and writes are partially ordered by a transitive &amp;quot;happens-before&amp;quot; relationship.&lt;br /&gt;
**The Java Virtual Machine can track this data flow, which the DSM uses to keep the heap, stacks, and locking state consistent across endpoints.&lt;br /&gt;
&lt;br /&gt;
=F4=&lt;br /&gt;
* Warm Blob Storage System.&lt;br /&gt;
** Warm Blob is a store for large quantities of immutable data that isn&#039;t frequently accessed, but must still be available.&lt;br /&gt;
** Built to reduce the overhead of Haystack for older data that doesn&#039;t need to be quite as available. Generally, data that is a few months old is moved from Haystack to f4.&lt;br /&gt;
** f4 reduces the effective replication factor of Haystack from 3.6 to 2.8 or 2.1 using Reed-Solomon coding and XOR coding respectively, while still tolerating failures.&lt;br /&gt;
** Less robust to data center failures as a result.&lt;br /&gt;
*Reed-Solomon coding uses (10, 4), meaning 10 data blocks and 4 parity blocks per stripe, so it can tolerate losing up to 4 blocks (i.e. 4 rack failures) with a 1.4x expansion factor. Two copies of this give 2 * 1.4 = 2.8 effective replication (see the worked sketch after this list).&lt;br /&gt;
*XOR coding uses (2, 1) across three data centers with a 1.5x expansion factor, which gives 1.5 * 1.4 = 2.1 effective replication.&lt;br /&gt;
*The caching mechanism reduces the load on the storage system and helps make BLOB storage scalable.&lt;br /&gt;
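&lt;br /&gt;
A worked sketch of the effective-replication arithmetic above (it just reproduces the calculation in Python; it is not code from the f4 paper):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def expansion(data_blocks, parity_blocks):&lt;br /&gt;
    # Reed-Solomon stripe: every data block is stored once, plus parity overhead&lt;br /&gt;
    return (data_blocks + parity_blocks) / data_blocks&lt;br /&gt;
&lt;br /&gt;
rs = expansion(10, 4)          # 1.4 expansion within a single data center&lt;br /&gt;
print(round(2 * rs, 2))        # 2.8 - two full RS-coded copies in two data centers&lt;br /&gt;
print(round(1.5 * rs, 2))      # 2.1 - XOR(2, 1) across three data centers adds&lt;br /&gt;
                               # only a 1.5x cross-data-center expansion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;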
&lt;br /&gt;
=Sapphire=&lt;br /&gt;
*Represents a building block toward a global distributed system. The main critique is that it does not present a specific use case upon which the design is built.&lt;br /&gt;
*Sapphire does not show its scalability boundaries. No distributed system model is “one size fits all”; it will most probably break for some large-scale distributed application.&lt;br /&gt;
*Reaching a global distributed system that addresses all the distributed OS use cases will be the cumulative work of many big organizations, building it block by block; the system will then evolve by putting all these different building blocks together. In other words, a global distributed system will come from a “bottom up not top down approach” [Somayaji, 2015].&lt;br /&gt;
*The concept of separating application logic from deployment logic helps programmers build a flexible system. The other important part that makes it scalable is that it is object based and could be integrated with any object-oriented language.&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20208</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20208"/>
		<updated>2015-04-20T14:13:20Z</updated>

		<summary type="html">&lt;p&gt;Jasons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [http://en.wikipedia.org/wiki/Distributed_hash_table Wikipedia&#039;s article on Distributed Hash Tables]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Kademlia Wikipedia&#039;s article on Kademlia]&lt;br /&gt;
* [http://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf Maymounkov and Mazieres, &amp;quot;Kademlia: A Peer-to-peer information system based on the XOR Metric&amp;quot; (2002)]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Tapestry_%28DHT%29 Wikipedia&#039;s article on Tapestry]&lt;br /&gt;
* [http://pdos.csail.mit.edu/~strib/docs/tapestry/tapestry_jsac03.pdf Zhao et al, &amp;quot;Tapestry: A Resilient Global-Scale Overlay for Service Deployment&amp;quot; (JSAC 2003)]&lt;br /&gt;
* [https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Geambasu.pdf Geambasu et al., &amp;quot;Comet: An active distributed key-value store&amp;quot; (OSDI 2010)]&lt;br /&gt;
&lt;br /&gt;
Session 10 is about Distributed Hash Tables: how they work, various algorithmic options (keyspace partitioning being a major example), and some of the earliest implementations.&lt;br /&gt;
&lt;br /&gt;
(Feel free to tweak the questions!)&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Distributed OSs use many systems and need replication&lt;br /&gt;
** DHTs distribute content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node IDs are used to locate values&lt;br /&gt;
** 160-bit key space; the key space is treated as a binary tree, partitioned among nodes and searched down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest common prefix with the target and increase the number of matching bits at each hop to get closer to the target node (a minimal sketch of the XOR distance follows below)&lt;br /&gt;
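&lt;br /&gt;
A minimal sketch of the XOR distance and bucket choice (the example IDs are made up and short; real Kademlia IDs are 160 bits):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def xor_distance(a, b):&lt;br /&gt;
    # the distance between two node/key IDs is simply their bitwise XOR&lt;br /&gt;
    return a ^ b&lt;br /&gt;
&lt;br /&gt;
def bucket_index(own_id, other_id):&lt;br /&gt;
    # the k-bucket index is the position of the highest differing bit,&lt;br /&gt;
    # i.e. the length of the shared prefix decides which bucket is used&lt;br /&gt;
    d = xor_distance(own_id, other_id)&lt;br /&gt;
    return d.bit_length() - 1 if d else None&lt;br /&gt;
&lt;br /&gt;
# each hop moves to contacts that share a longer prefix with the target,&lt;br /&gt;
# so the distance at least halves per hop: O(log n) hops per lookup&lt;br /&gt;
print(bucket_index(0b10110000, 0b10111010))   # bucket 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;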
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** The routing table is a binary tree with k-buckets as its leaves&lt;br /&gt;
*** Any given set of k nodes is very unlikely to all fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse the binary tree, so a lookup takes at most O(log n) hops&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scale to handle node arrival, departure and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate that implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
Assumes clients are untrusted autonomous nodes. &lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers, e.g. an executing handler cannot corrupt the node or use unlimited resources. Handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users downloading Comet must trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects or resources stored on the same node and has no direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Comet for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members: Ashley, Dany, Alexis, Khaled&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon node creation (join), a node builds an optimal routing table. A generic consistent-hashing sketch follows below.&lt;br /&gt;
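&lt;br /&gt;
A minimal, generic consistent-hashing sketch (not Tapestry itself - Tapestry layers prefix routing and object location on top of ideas like this; the node names below are invented):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import hashlib&lt;br /&gt;
from bisect import bisect_right&lt;br /&gt;
&lt;br /&gt;
def sha1_int(name):&lt;br /&gt;
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)&lt;br /&gt;
&lt;br /&gt;
class Ring:&lt;br /&gt;
    def __init__(self, nodes):&lt;br /&gt;
        # place every node on the ring at the hash of its name&lt;br /&gt;
        self.points = sorted((sha1_int(n), n) for n in nodes)&lt;br /&gt;
&lt;br /&gt;
    def lookup(self, key):&lt;br /&gt;
        # a key is owned by the first node clockwise from its hash value&lt;br /&gt;
        idx = bisect_right(self.points, (sha1_int(key),)) % len(self.points)&lt;br /&gt;
        return self.points[idx][1]&lt;br /&gt;
&lt;br /&gt;
ring = Ring([&amp;quot;nodeA&amp;quot;, &amp;quot;nodeB&amp;quot;, &amp;quot;nodeC&amp;quot;])&lt;br /&gt;
print(ring.lookup(&amp;quot;some-object&amp;quot;))   # only keys near a joining/leaving node move&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;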
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, you see which neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the system is free of adversaries: while network failures may happen and nodes may go down, no node will deliberately try to interfere with the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access time to any given node. Best-effort publishing/unpublishing via decentralized object location and routing (DOLR).&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;br /&gt;
&lt;br /&gt;
== DHT Discussion ==&lt;br /&gt;
* DHTs are weak against mutation (changing stored data)&lt;br /&gt;
* performance - high variance&lt;br /&gt;
** multi-hop lookups vs. a direct connection&lt;br /&gt;
*DHT as a DNS server&lt;br /&gt;
**DHTs have no ownership/authority&lt;br /&gt;
*DHT as a web hosting server&lt;br /&gt;
** ok for static content, but not good for dynamic content, which might have private data&lt;br /&gt;
&lt;br /&gt;
== Other Resources == &lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1610546&amp;amp;tag=1 A survey and comparison of peer-to-peer overlay network schemes]&lt;br /&gt;
*[http://link.springer.com/article/10.1007/s12083-012-0157-3 Collaborative Applications over Peer-to-Peer Systems – Challenges and Solutions]&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_8&amp;diff=20207</id>
		<title>DistOS 2015W Session 8</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_8&amp;diff=20207"/>
		<updated>2015-04-20T10:24:56Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* WEB */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* The link to Vannevar Bush’s article, “As we may think” http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ &lt;br /&gt;
&lt;br /&gt;
* How do the article and the video relate to the course? &lt;br /&gt;
The creation of the Web is basically what drove the need for a way of connecting thousands of machines and for a mechanism that lets these machines share files and data efficiently. In other words, the science of distributed operating systems has evolved as a result of the creation of the Web and its exponential growth.&lt;br /&gt;
&lt;br /&gt;
*An OS is made to run programs&lt;br /&gt;
**example:&lt;br /&gt;
**Mac OS - GUI&lt;br /&gt;
**UNIX - text processing, shell pipe&lt;br /&gt;
&lt;br /&gt;
== WEB ==&lt;br /&gt;
&lt;br /&gt;
*The Web is not tied to a fixed interface&lt;br /&gt;
**the interface is flexible&lt;br /&gt;
**different types of content (text, images, video, files, dynamic, static) are presented in different ways&lt;br /&gt;
**a website has a one-to-many relationship with its viewers.&lt;br /&gt;
&lt;br /&gt;
*stateless&lt;br /&gt;
**allows scalability&lt;br /&gt;
**the server does not keep client state&lt;br /&gt;
**the disadvantage of a stateful server is the cost of coordinating client state between servers&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_8&amp;diff=20206</id>
		<title>DistOS 2015W Session 8</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_8&amp;diff=20206"/>
		<updated>2015-04-20T10:19:49Z</updated>

		<summary type="html">&lt;p&gt;Jasons: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* The link to Vannevar Bush’s article, “As we may think” http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ &lt;br /&gt;
&lt;br /&gt;
* How do the article and the video relate to the course? &lt;br /&gt;
The creation of the Web is basically what drove the need for a way of connecting thousands of machines and for a mechanism that lets these machines share files and data efficiently. In other words, the science of distributed operating systems has evolved as a result of the creation of the Web and its exponential growth.&lt;br /&gt;
&lt;br /&gt;
*An OS is made to run programs&lt;br /&gt;
**example:&lt;br /&gt;
**Mac OS - GUI&lt;br /&gt;
**UNIX - text processing, shell pipe&lt;br /&gt;
&lt;br /&gt;
== WEB ==&lt;br /&gt;
&lt;br /&gt;
*The Web is not tied to a fixed interface&lt;br /&gt;
**the interface is flexible&lt;br /&gt;
**a website has a one-to-many relationship with its viewers.&lt;br /&gt;
&lt;br /&gt;
*stateless&lt;br /&gt;
**allows scalability&lt;br /&gt;
**the server does not keep client state&lt;br /&gt;
**the disadvantage of a stateful server is the cost of coordinating client state between servers&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20205</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20205"/>
		<updated>2015-04-20T10:12:24Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Key Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, which was discussed previously, Ceph is a general-purpose distributed file system (for various file sizes, from small to large). It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
HOW IT WORKS&lt;br /&gt;
Ceph file system runs on top of the same object storage system that provides object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and filenames of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.&lt;br /&gt;
&lt;br /&gt;
BENEFITS&lt;br /&gt;
The Ceph file system provides numerous benefits:&lt;br /&gt;
It provides stronger data safety for mission-critical applications.&lt;br /&gt;
It provides virtually unlimited storage to file systems.&lt;br /&gt;
Applications that use file systems can use Ceph FS with POSIX semantics. No integration or customization required!&lt;br /&gt;
Ceph automatically balances the file system to deliver maximum performance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Advantages of using File systems ==&lt;br /&gt;
*  It supports heterogeneous operating systems, including all flavors of Unix as well as Linux and Windows&lt;br /&gt;
* Multiple client machines can access a single resource simultaneously.&lt;br /&gt;
* Enables sharing common application binaries and read-only information instead of putting them on every single machine, which reduces overall disk storage cost and administration overhead.&lt;br /&gt;
*Gives groups of users access to uniform data.&lt;br /&gt;
*Useful when many users exist on many systems; instead of locating each user&#039;s home directory on every single machine, a network file system lets you keep all users&#039; home directories on a single machine under /home&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate directly with OSDs to perform I/O operations&lt;br /&gt;
** Data is stored in objects (variable size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with it to perform metadata operations like open and rename. It also manages client capabilities.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
** good for scalability of various file size, from small to large&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** It distributes the metadata among multiple metadata servers using dynamic sub-tree partitioning, meaning folders that get used more often get their meta-data replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the location of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;; a minimal sketch of this simplified idea appears after this list)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
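&lt;br /&gt;
A minimal sketch of the simplified idea above - every client computes placement from the name alone, with no lookup. This is not the real CRUSH algorithm (which also handles weights, failure domains and rebalancing); the function below is only illustrative:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import hashlib&lt;br /&gt;
&lt;br /&gt;
def placement(name, num_servers, replicas=3):&lt;br /&gt;
    # every client hashes the name the same way, so all clients agree&lt;br /&gt;
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)&lt;br /&gt;
    first = h % num_servers&lt;br /&gt;
    # pick a few distinct servers deterministically from the hash&lt;br /&gt;
    return [(first + i) % num_servers for i in range(replicas)]&lt;br /&gt;
&lt;br /&gt;
print(placement(&amp;quot;mydir/myfile&amp;quot;, num_servers=8))   # same answer on every client&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;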
&lt;br /&gt;
= Chubby =&lt;br /&gt;
It is a coarse-grained lock service, made and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
Chubby was designed to be a lock service, a distributed system that clients could connect to and share access to small files. The servers providing the system are partitioned into a variety of cells and access for a particular file is managed through one elected master node in one cell. This master makes all decisions and informs the rest of the cell nodes of that decision. If the master fails, the other nodes elect a new master. The problem of asynchronous consensus is solved through the use of timeouts as a failure detector. To avoid the scaling problem of a single bottleneck, the number of cells can be increased with the cost of making some cells smaller.&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi POSIX-compliant file-system with a 256KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which server is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved by using caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS for electing master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Due to the use of Paxos (essentially the only proven algorithm for this problem at the time), the Chubby cell is limited to only five servers. While this limits fault tolerance, in practice this is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to show that they are still alive. This ensures that a client disappearing while holding a lock does not hold that lock indefinitely.&lt;br /&gt;
* Clients must also consequently re-verify that they hold a lock that they think they hold because Chubby may have timed them out.&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20204</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20204"/>
		<updated>2015-04-20T10:07:40Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, which was discussed previously, Ceph is a general-purpose distributed file system (for various file sizes, from small to large). It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
HOW IT WORKS&lt;br /&gt;
Ceph file system runs on top of the same object storage system that provides object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and filenames of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.&lt;br /&gt;
&lt;br /&gt;
BENEFITS&lt;br /&gt;
The Ceph file system provides numerous benefits:&lt;br /&gt;
It provides stronger data safety for mission-critical applications.&lt;br /&gt;
It provides virtually unlimited storage to file systems.&lt;br /&gt;
Applications that use file systems can use Ceph FS with POSIX semantics. No integration or customization required!&lt;br /&gt;
Ceph automatically balances the file system to deliver maximum performance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Advantages of using File systems ==&lt;br /&gt;
*  It supports heterogeneous operating systems, including all flavors of Unix as well as Linux and Windows&lt;br /&gt;
* Multiple client machines can access a single resource simultaneously.&lt;br /&gt;
* Enables sharing common application binaries and read-only information instead of putting them on every single machine, which reduces overall disk storage cost and administration overhead.&lt;br /&gt;
*Gives groups of users access to uniform data.&lt;br /&gt;
*Useful when many users exist on many systems; instead of locating each user&#039;s home directory on every single machine, a network file system lets you keep all users&#039; home directories on a single machine under /home&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate directly with OSDs to perform I/O operations&lt;br /&gt;
** Data is stored in objects (variable size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with it to perform metadata operations like open and rename. It also manages client capabilities.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** It distributes the metadata among multiple metadata servers using dynamic sub-tree partitioning, meaning folders that get used more often get their meta-data replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the location of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
It is a coarse-grained lock service, made and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
Chubby was designed to be a lock service, a distributed system that clients could connect to and share access to small files. The servers providing the system are partitioned into a variety of cells and access for a particular file is managed through one elected master node in one cell. This master makes all decisions and informs the rest of the cell nodes of that decision. If the master fails, the other nodes elect a new master. The problem of asynchronous consensus is solved through the use of timeouts as a failure detector. To avoid the scaling problem of a single bottleneck, the number of cells can be increased with the cost of making some cells smaller.&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi POSIX-compliant file-system with a 256KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which server is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved by using caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS for electing master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Due to the use of Paxos (essentially the only proven algorithm for this problem at the time), the Chubby cell is limited to only five servers. While this limits fault tolerance, in practice this is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to show that they are still alive. This ensures that a client disappearing while holding a lock does not hold that lock indefinitely.&lt;br /&gt;
* Clients must also consequently re-verify that they hold a lock that they think they hold because Chubby may have timed them out.&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20203</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20203"/>
		<updated>2015-04-20T10:03:57Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Google File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
It is a distributed OS running on a set of computers that are interconnected by a network. It basically unifies the different computers into a single system.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
The structure is based on an object-thread model. The system has a set of objects, which are defined by their classes. Objects respond to messages: sending a message to an object causes the object to execute the corresponding method and then reply. &lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects are objects that have one or more processes associated with them; they can also communicate with the external environment. &lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
The content of Clouds data is long-lived. Since the memory is implemented as a single-level store, the data persists and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are the logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently. The nature of the Clouds object prohibits a thread from accessing any data outside the current address space in which it is executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural&lt;br /&gt;
# Invocations work across machine boundaries&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general-purpose, multi-user and mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs - the birthplace of Unix. The original Unix OS had no support for networking, and over the years there were many attempts by others to create distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost effective using cheap, modern microcomputers. &lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This is made possible by two properties:&lt;br /&gt;
# A per-process-group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries and system calls are similar to those of Unix, so a casual user cannot distinguish between the two. The problems in Unix were too deep to fix in place, but the good ideas were carried along: problems Unix addressed badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Similarities with UNIX ==&lt;br /&gt;
* shell&lt;br /&gt;
* Various C compilers&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along the lines of service function.&lt;br /&gt;
* CPU servers and terminals use the same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P, is used to access resources.&lt;br /&gt;
# Disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls - mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; is used to attach new file system to a point in name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; is used to attach a kernel resident (existing, mounted) file system to name space and also arrange pieces of name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unbind&#039;&#039;&#039; which undoes the effects of the other two calls.&lt;br /&gt;
&lt;br /&gt;
Namespaces in Plan 9 are on a per-process-group basis. While every resource can be referenced with a unique name, using mount and bind each process group can build a custom namespace as it sees fit.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
&lt;br /&gt;
=== Parallel Programming ===&lt;br /&gt;
Parallel programming was supported in two ways:&lt;br /&gt;
* Kernel provides simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* Programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem which displays current process information in the form of files exists in modern Linux kernels.&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
It is a scalable, distributed file system for large, data-intensive applications. It is crafted for Google&#039;s unique needs as a search engine company.&lt;br /&gt;
&lt;br /&gt;
Unlike most filesystems, GFS is implemented as a library linked into individual applications and is not part of the kernel. While this introduces some technical overhead, it gives the system more freedom to implement or not implement certain non-standard features.&lt;br /&gt;
&lt;br /&gt;
Link to an explanation on how GFS works&lt;br /&gt;
[http://computer.howstuffworks.com/internet/basics/google-file-system1.htm]&lt;br /&gt;
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google File System consists of a single master, multiple chunkservers and multiple clients. Chunkservers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64 KB blocks, each with its own checksum for data integrity checks. Chunks are replicated between servers, three replicas by default. The master maintains all the file system metadata, which includes the namespace and chunk locations.&lt;br /&gt;
&lt;br /&gt;
Each chunk is 64 MB large (contrast this with typical filesystem sectors of 512 or 4096 bytes), as the system is meant to hold enormous amounts of data - namely the internet. The large chunk size is also important for the scalability of the system: the larger the chunk size, the less metadata the master server has to store for any given amount of data (see the back-of-the-envelope sketch below). With the current size, the master server is able to store the entirety of the metadata in memory, increasing performance by a significant margin.&lt;br /&gt;
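&lt;br /&gt;
A back-of-the-envelope sketch of that scaling argument (the 64-byte per-chunk figure is an assumption in the spirit of the paper, not an exact number):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CHUNK_SIZE = 64 * 2**20        # 64 MB chunks&lt;br /&gt;
META_PER_CHUNK = 64            # assume roughly 64 bytes of master metadata per chunk&lt;br /&gt;
&lt;br /&gt;
def master_metadata_bytes(total_data_bytes):&lt;br /&gt;
    chunks = total_data_bytes // CHUNK_SIZE&lt;br /&gt;
    return chunks * META_PER_CHUNK&lt;br /&gt;
&lt;br /&gt;
petabyte = 2**50&lt;br /&gt;
print(master_metadata_bytes(petabyte) / 2**30)   # about 1 GiB - fits in the master&#039;s RAM&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;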
&lt;br /&gt;
== Operation ==&lt;br /&gt;
&lt;br /&gt;
Master and Chunk server communication consists of&lt;br /&gt;
# checking whether any chunkserver is down&lt;br /&gt;
# checking if any file is corrupted&lt;br /&gt;
# deleting stale chunks&lt;br /&gt;
&lt;br /&gt;
When a client wants to do some operations on the chunks&lt;br /&gt;
# it first asks the master server for the list of servers that store the parts of a file it wants to access&lt;br /&gt;
# it receives a list of chunk servers, with multiple servers for each chunk&lt;br /&gt;
# it finally communicates with the chunk servers to perform the operation (a minimal sketch of this read flow follows the next paragraph)&lt;br /&gt;
&lt;br /&gt;
The system is geared towards appends and sequential reads. This is why the master server responds with multiple server addresses for each chunk - the client can then request a small piece from each server, increasing the data throughput linearly with the number of servers. Writes, in general, are in the form of a special &#039;&#039;append&#039;&#039; system call. When appending, there is no chance that two clients will want to write to the same location at the same time. This helps avoid any potential synchronization issues. If there are multiple appends to the same file at the same time, the chunk servers are free to order them as they wish (chunks on each server are not guaranteed to be byte-for-byte identical). Changes may also be applied multiple times. These issues are left for the application using GFS to resolve themselves. While a problem in the general sense, this is good enough for Google&#039;s needs.&lt;br /&gt;
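&lt;br /&gt;
A minimal sketch of that read path. The master/chunkserver objects and the method names (lookup_chunk, read_chunk) are invented for illustration; they are not the actual GFS interfaces:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
CHUNK_SIZE = 64 * 2**20&lt;br /&gt;
&lt;br /&gt;
def gfs_read(master, chunkservers, filename, offset, length):&lt;br /&gt;
    # 1. translate the byte offset into a chunk index within the file&lt;br /&gt;
    chunk_index = offset // CHUNK_SIZE&lt;br /&gt;
    # 2. ask the master for the chunk handle and the replicas that hold it&lt;br /&gt;
    handle, replica_ids = master.lookup_chunk(filename, chunk_index)&lt;br /&gt;
    # 3. read from any replica; clients can spread requests across replicas&lt;br /&gt;
    for rid in replica_ids:&lt;br /&gt;
        data = chunkservers[rid].read_chunk(handle, offset % CHUNK_SIZE, length)&lt;br /&gt;
        if data is not None:&lt;br /&gt;
            return data&lt;br /&gt;
    raise IOError(&amp;quot;no replica answered&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;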
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
GFS is built with failure in mind. The system expects that at any time, there is some server or disk that is malfunctioning. The system deals with the failures as follows.&lt;br /&gt;
&lt;br /&gt;
=== Chunk Servers ===&lt;br /&gt;
&lt;br /&gt;
By default, chunks are replicated to three servers. This exact number depends on the application in question doing the write. When a chunk server finds that some of its data is corrupt, it grabs the data from other servers to repair itself{{Citation needed}}.&lt;br /&gt;
&lt;br /&gt;
=== Master Server ===&lt;br /&gt;
&lt;br /&gt;
For efficiency, there is only a single live master server at a time. While this does not make the system completely distributed, it avoids many synchronization problems and suits Google&#039;s needs. At any point in time, there are multiple read-only master servers that copy metadata from the currently live master. Should the live master go down, they will serve read operations from clients until one of the hot spares is promoted to being the new live master server.&lt;br /&gt;
&lt;br /&gt;
=== Server:Stateless ===&lt;br /&gt;
*the servers do not store state about clients&lt;br /&gt;
*no caching at client either&lt;br /&gt;
**since most programs only care about the output&lt;br /&gt;
**if client wants up-to-date result, rerun the program&lt;br /&gt;
*use heartbeat messages to monitor servers&lt;br /&gt;
**good for a system that assumes changes (or failures) are frequent&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=20202</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=20202"/>
		<updated>2015-04-20T09:51:01Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Andrew File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Andrew File System =&lt;br /&gt;
AFS (Andrew File System) was set up as a direct response to NFS. Essentially universities found issues when they tried to scale NFS in a way that would allow them to share files amongst their staff effectively. AFS was more scalable than NFS because read-write operations happened locally before they were committed to the server (data store).&lt;br /&gt;
&lt;br /&gt;
Since AFS copies files locally when they are opened and only sends the data back when they are closed, all operations between opening and closing the file are very fast and do not need to access the network. NFS works with files remotely, so there is no data to transfer when opening/closing a file, making those operations instant.&lt;br /&gt;
&lt;br /&gt;
There are several problems with this design, however:&lt;br /&gt;
* The local system must have enough space to temporarily store the file.&lt;br /&gt;
* Opening and closing the files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).&lt;br /&gt;
* If the close operation fails, the system will not have the updated version of the file. Many programs are designed around local filesystems, and therefore don&#039;t even check the return value of the close operation (as this is unlikely to fail on a local FS), giving users the false impression that everything went well.&lt;br /&gt;
&lt;br /&gt;
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously annoying to set up as it is geared towards university-sized networks, further limiting its success.&lt;br /&gt;
&lt;br /&gt;
*Kerberos protocol&lt;br /&gt;
**an authentication protocol using time-based tickets&lt;br /&gt;
**a single sign-on system used to authenticate once and then use other services&lt;br /&gt;
**AFS uses Kerberos for authentication, and implements access control lists on directories for users and groups.&lt;br /&gt;
&lt;br /&gt;
= Amoeba Operating System =&lt;br /&gt;
&lt;br /&gt;
=== Capabilities: ===&lt;br /&gt;
* A capability is a kind of ticket or key that allows its holder to perform some (not necessarily all) operations on an object; it acts as a pointer to that object.&lt;br /&gt;
* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each.&lt;br /&gt;
* Capabilities also work across wide-area networks.&lt;br /&gt;
* A client sends a request message to the server and blocks; after the server has performed the operation, it sends back a reply message that unblocks the client.&lt;br /&gt;
* Sending a message, blocking, and accepting the reply together form a remote procedure call, which can be encapsulated to make an entire remote operation look like a local procedure call.&lt;br /&gt;
* The second field of a capability is used by the server to identify which of its objects is being addressed: the server port and object number together identify the object on which the operation is performed.&lt;br /&gt;
* The third field is the rights field, which contains a bit map telling which operations the holder of the capability may perform.&lt;br /&gt;
* The check field is a 48-bit random number generated by the server, which makes capabilities hard to forge (a minimal sketch of a capability appears after this list).&lt;br /&gt;
* X11 Window management&lt;br /&gt;
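&lt;br /&gt;
A minimal sketch of such a capability, following the field layout described in the notes above (the rights-bit names are invented; real Amoeba packs these fields into a compact binary ticket):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import secrets&lt;br /&gt;
from dataclasses import dataclass&lt;br /&gt;
&lt;br /&gt;
READ, WRITE, DESTROY = 1, 2, 4      # example rights bits (invented names)&lt;br /&gt;
&lt;br /&gt;
@dataclass(frozen=True)&lt;br /&gt;
class Capability:&lt;br /&gt;
    server_port: int    # identifies the server that manages the object&lt;br /&gt;
    object_num: int     # which object on that server is addressed&lt;br /&gt;
    rights: int         # bit map of operations the holder may perform&lt;br /&gt;
    check: int          # 48-bit random number that makes forging hard&lt;br /&gt;
&lt;br /&gt;
def new_capability(server_port, object_num, rights):&lt;br /&gt;
    return Capability(server_port, object_num, rights, secrets.randbits(48))&lt;br /&gt;
&lt;br /&gt;
def allowed(cap, operation_bit):&lt;br /&gt;
    return bool(cap.rights &amp;amp; operation_bit)&lt;br /&gt;
&lt;br /&gt;
cap = new_capability(server_port=7, object_num=42, rights=READ | WRITE)&lt;br /&gt;
print(allowed(cap, DESTROY))        # False: this ticket does not grant destroy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;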
&lt;br /&gt;
&lt;br /&gt;
=== Thread Management: ===&lt;br /&gt;
* A single process can have multiple threads, and each thread has its own program counter, registers and stack &lt;br /&gt;
* Threads behave like lightweight processes&lt;br /&gt;
* Threads can be synchronized using mutexes and semaphores &lt;br /&gt;
* The file server (Bullet), for example, uses multiple threads; when one thread blocks, the others can keep running, synchronizing through mutexes&lt;br /&gt;
* With this design, a user process can pull about 813 kbytes/sec from the file server&lt;br /&gt;
&lt;br /&gt;
= Unique features =&lt;br /&gt;
&lt;br /&gt;
== Pool processors ==&lt;br /&gt;
Pool processors are a group of CPUs that are dynamically allocated to users as needed. When a program is executed, it runs on any available processor.&lt;br /&gt;
&lt;br /&gt;
== Supported architectures ==&lt;br /&gt;
Many different processor architectures are supported including:&lt;br /&gt;
* x86 (i80386 and later, including the Pentium)&lt;br /&gt;
* 68K&lt;br /&gt;
* SPARC&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Design Decisions ===&lt;br /&gt;
* Designed for a cluster of workstations with high-speed network access (it only really supports LANs).&lt;br /&gt;
* Abstract the physical architecture of the participating workstations, by defining common protocols providing well-defined interfaces.&lt;br /&gt;
&lt;br /&gt;
V ran on a LAN, and its developers built a very fast IPC protocol, which allowed it to be one of the fastest distributed operating systems within a small geographic area. Aside from the IPC protocols, V also implemented RPC calls in the background.&lt;br /&gt;
&lt;br /&gt;
V uses the strong consistency model. This model can cause issues with concurrency because in V files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This could result in issues unless there is an effective implementation to deal with multiple versions, etc.&lt;br /&gt;
&lt;br /&gt;
The VMTP protocol was used for communication. It supports request-response behavior, and it provides transparency, a group communication facility and flow control. It is pretty much like TCP.&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20201</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20201"/>
		<updated>2015-04-20T09:43:44Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Locus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and completely vanished in the 2000s. It was started by Bell Labs, General Electric and MIT, but Bell Labs backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written in machine-specific assembly.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception is a small, minimal API system designed by two guys from Bell Labs. It was essentially an OS that was optimized for the needs of programmers but not much beyond that. The UNIX OS ran on one computer, and terminals ran from that one computer. Thus it is not a distributed operating system as it is centralized and implements time sharing. In fact, it didn&#039;t even have support for networking in the first version.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure. Sun wanted to encrypt the RPC connections, but encryption would have triggered government regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
*Sun used &amp;quot;open protocol&amp;quot; approach to develop NFS&lt;br /&gt;
** Simply specified the exact message formats that clients and servers would use to communicate.&lt;br /&gt;
** Different groups could develop their own NFS servers and thus compete in an NFS marketplace.&lt;br /&gt;
** ex. Sun, NetApp, EMC, IBM, etc.&lt;br /&gt;
&lt;br /&gt;
*NFSv2: simple and fast server crash recovery&lt;br /&gt;
**Every minute that the server is down (or unreachable) makes all the clients unproductive.&lt;br /&gt;
**The protocol is designed so that each request carries all the information needed to complete it.&lt;br /&gt;
**Stateless approach: the server does not track anything about what clients are doing (a sketch of the idea follows this list).&lt;br /&gt;
**No fancy crash recovery is needed: the server just starts running again, and a client, at worst, has to retry a request.&lt;br /&gt;
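&lt;br /&gt;
The C sketch below illustrates why statelessness makes recovery trivial; it is illustrative only and does not use the real NFSv2 wire format. The struct fields, the stand-in transport function send_and_wait, and the retry limit are all invented for the example: a failed first attempt simulates a server crash, and the self-contained, idempotent request is simply resent.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Illustrative only -- not the real NFSv2 wire format. The point is that&lt;br /&gt;
 * every request carries the file handle, offset, and count, so the server&lt;br /&gt;
 * keeps no per-client state. */&lt;br /&gt;
struct read_request {&lt;br /&gt;
    uint32_t file_handle;   /* identifies the file on the server */&lt;br /&gt;
    uint64_t offset;        /* where to read from */&lt;br /&gt;
    uint32_t count;         /* how many bytes to read */&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
/* Stand-in transport: pretend the first attempt is lost (server crash or&lt;br /&gt;
 * dropped reply) and the next one succeeds. A real client would do an RPC. */&lt;br /&gt;
static int send_and_wait(const struct read_request *req, char *buf) {&lt;br /&gt;
    static int attempts = 0;&lt;br /&gt;
    if (attempts++ == 0)&lt;br /&gt;
        return -1;                      /* simulated timeout */&lt;br /&gt;
    memset(buf, &#039;x&#039;, req-&amp;gt;count);    /* simulated file data */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* Because requests are self-contained and reads are idempotent, the client&lt;br /&gt;
 * can simply retry after a crash or a lost reply. */&lt;br /&gt;
static int stateless_read(const struct read_request *req, char *buf) {&lt;br /&gt;
    for (int attempt = 1; attempt &amp;lt;= 5; attempt++) {&lt;br /&gt;
        if (send_and_wait(req, buf) == 0)&lt;br /&gt;
            return 0;&lt;br /&gt;
        fprintf(stderr, &amp;quot;retrying request (attempt %d)\n&amp;quot;, attempt);&lt;br /&gt;
    }&lt;br /&gt;
    return -1;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    char buf[64];&lt;br /&gt;
    struct read_request req = { 7, 0, sizeof buf };&lt;br /&gt;
    if (stateless_read(&amp;amp;req, buf) == 0)&lt;br /&gt;
        printf(&amp;quot;read %u bytes\n&amp;quot;, (unsigned) req.count);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;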
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow that they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* It tried to distribute both files and processes&lt;br /&gt;
# Allowed for process migration&lt;br /&gt;
# Transparency&lt;br /&gt;
#* It provided network transparency to “disguise” its distributed context.&lt;br /&gt;
# Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today&#039;s systems. It uses replication and partitioning, techniques that are still employed in cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
*LOCUS was capable of distributing files and processes among the nodes, but not computing power.&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine; if that machine was no longer idle because another user had started using it, the system would take care of migrating the process to another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; handled by keeping a version number for each file (see the sketch after this list)&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; handled by disabling client caching for the file, enabling write-blocking, and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM available for caching, writes to the disk, not reads, were the main bottleneck.&lt;br /&gt;
#* Log-structured file systems are optimized for writes: changes are appended at the current head of the log rather than overwriting old data in place.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSDs (solid-state drives) use similar out-of-place, log-style writes internally.&lt;br /&gt;
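&lt;br /&gt;
The C sketch below shows the version-number idea behind Sprite&#039;s handling of sequential file sharing; it is a minimal illustration, not actual Sprite code. The struct, the stand-in server_current_version function, and the hard-coded version numbers are all invented for the example: a client may use its cached copy only if the copy&#039;s version matches the server&#039;s current version for that file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* The server bumps a file&#039;s version each time it is opened for writing;&lt;br /&gt;
 * a cached copy is usable only while its version matches. */&lt;br /&gt;
struct cached_file {&lt;br /&gt;
    uint64_t version;     /* version of the data held in the client cache */&lt;br /&gt;
    char     data[4096];  /* cached contents (illustrative size) */&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
/* Stand-in for asking the file server for the file&#039;s current version. */&lt;br /&gt;
static uint64_t server_current_version(void) {&lt;br /&gt;
    return 3;             /* pretend the file has been written three times */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* Returns 1 if the cached copy may be used, 0 if it must be refetched. */&lt;br /&gt;
static int cache_is_valid(const struct cached_file *cf) {&lt;br /&gt;
    return cf-&amp;gt;version == server_current_version();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    struct cached_file cf = { .version = 2 };   /* stale copy */&lt;br /&gt;
    if (cache_is_valid(&amp;amp;cf))&lt;br /&gt;
        printf(&amp;quot;using cached data\n&amp;quot;);&lt;br /&gt;
    else&lt;br /&gt;
        printf(&amp;quot;version mismatch: refetching file from server\n&amp;quot;);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;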
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and implemented caching to increase performance.&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=20198</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=20198"/>
		<updated>2015-04-20T09:07:56Z</updated>

		<summary type="html">&lt;p&gt;Jasons: /* Reading Response Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
*In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multitasking. &lt;br /&gt;
**In the old days, computers were expensive and physically large, so one computer was shared among many users using terminals.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Jason, Kirill, Sravan, Agustin, Hassan Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, led by Doug Engelbart&lt;br /&gt;
* Initial public display of many modern technologies&lt;br /&gt;
* One computer with multiple remote terminals&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse (and coined the term)&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking (hypertext)&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control: a user could grant read-only or read-write access to a file&lt;br /&gt;
* Chorded keyboard (keyset)&lt;br /&gt;
** A macro/chord keyset that allowed commands and text to be entered quickly while using the mouse at the same time&lt;br /&gt;
* Not really a distributed operating system, but a great start because multiple users at different terminals could share the same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Used a protocol similar to a cross between UDP and TCP, developed before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use the processing power of others&lt;br /&gt;
* Much of the time a machine would sit idle, which was remarkable at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor-led discussion 1 = &lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it is more of a dream. &lt;br /&gt;
&lt;br /&gt;
Throughout the history of trying to achieve this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would come up with a solution to that issue, only to find that some other technical issue would crop up. For example, &#039;The Mother of All Demos&#039; tried to build something like a distributed operating system but had no networking; they had to point a television camera at the computer monitor to be able to demo their concept. The early internet came along and dealt with the lack of networking, but the early internet itself had other technical issues.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;time sharing&#039;&#039;&#039;, an old term for multiple processes running concurrently on one machine. At the time it referred to multiple users sharing the CPU cycles of a single computer; today it is much more common for a single user&#039;s many processes to share a single CPU.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Can&#039;t be out of sync&lt;br /&gt;
* Only one data store/source&lt;br /&gt;
|&lt;br /&gt;
Synchronization issues&lt;br /&gt;
* Server-client model&lt;br /&gt;
** If two or more servers get requests at the same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
* Full control of all software&lt;br /&gt;
* All possible users are physically co-located&lt;br /&gt;
* Latency is minimized&lt;br /&gt;
|&lt;br /&gt;
* Games are fundamentally low-latency; networking is fundamentally high-latency.&lt;br /&gt;
* Input prediction doesn&#039;t work well in twitchy games.&lt;br /&gt;
* Have to handle lying or faulty clients.&lt;br /&gt;
* Have to handle users finding each other.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** A classic example would be wireless ad-hoc networking&lt;br /&gt;
&lt;br /&gt;
The items in the above list are not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may result.&lt;/div&gt;</summary>
		<author><name>Jasons</name></author>
	</entry>
</feed>