<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Aimbot</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Aimbot"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Aimbot"/>
	<updated>2026-04-24T08:23:49Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19912</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19912"/>
		<updated>2015-02-28T17:01:23Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: /* Supported architectures */ Changed HTML to wiki markup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Andrew File System =&lt;br /&gt;
AFS (Andrew File System) was created as a direct response to NFS. Universities ran into problems when they tried to scale NFS to share files effectively among their staff. AFS was more scalable than NFS because read-write operations happened locally before being committed to the server (data store).&lt;br /&gt;
&lt;br /&gt;
Since AFS copied files locally when they were opened and only sent the data back when they were closed, all operations in between are very fast and do not need the network. NFS, in contrast, works with files remotely, so there is no data to transfer when opening/closing a file, making those operations instant.&lt;br /&gt;
&lt;br /&gt;
There are several problems with this design, however.&lt;br /&gt;
* The local system must have enough space to temporarily store the file.&lt;br /&gt;
* Opening and closing the files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).&lt;br /&gt;
* If the close operation fails, the system will not have the updated version of the file. Many programs don&#039;t even check the return value of the close operation, giving users the false impression that everything went well.&lt;br /&gt;
&lt;br /&gt;
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously difficult to set up, as it is geared towards university-sized networks, further limiting its success.&lt;br /&gt;
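The open/close semantics above can be sketched in a few lines (a toy model, not real AFS code):&lt;br /&gt;

```python
# Sketch (assumption: a simplified model, not real AFS) of AFS-style
# whole-file caching: open fetches the entire file, reads and writes
# are local, and close writes the whole file back to the server.

class AfsClient:
    def __init__(self, server):
        self.server = server   # dict standing in for the file server
        self.cache = {}        # local copies of open files

    def open(self, name):
        # The whole file travels over the network, however large it is.
        self.cache[name] = bytearray(self.server.get(name, b""))

    def read(self, name, off, n):
        return bytes(self.cache[name][off:off + n])   # purely local

    def write(self, name, off, data):
        buf = self.cache[name]
        buf[off:off + len(data)] = data               # purely local

    def close(self, name):
        # Only now does the server see the update; if this step fails,
        # the new version is silently lost unless the caller checks.
        self.server[name] = bytes(self.cache.pop(name))

server = {"notes.txt": b"hello"}
c = AfsClient(server)
c.open("notes.txt")
c.write("notes.txt", 0, b"HELLO")
c.close("notes.txt")
```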
&lt;br /&gt;
&lt;br /&gt;
= Amoeba Operating System =&lt;br /&gt;
&lt;br /&gt;
=== Capabilities: ===&lt;br /&gt;
* A capability acts as a pointer to an object: a kind of ticket or key that allows its holder to perform some (not necessarily all) operations on that object &lt;br /&gt;
* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each &lt;br /&gt;
* Because capabilities name objects anywhere on the network, they also work for communication across a wide-area network &lt;br /&gt;
* The second field is used by the server to identify which of its objects is being addressed: the server port and object number together identify the object on which the operation is to be performed &lt;br /&gt;
* The third field is the rights field, which contains a bit map telling which operations the holder of the capability may perform &lt;br /&gt;
* A 48-bit random number (the check field) makes capabilities hard to forge &lt;br /&gt;
* To invoke an operation, a client sends a request message and blocks; after the server has performed the operation, it sends back a reply message that unblocks the client &lt;br /&gt;
* Sending a message, blocking and accepting the reply together form a remote procedure call, which can be encapsulated to make an entire remote operation look like a local procedure &lt;br /&gt;
* X11 Window management&lt;br /&gt;
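The capability fields above can be sketched as follows (a simplified model; the field sizes, the HMAC-based check and all names are illustrative, not Amoeba&#039;s actual format):&lt;br /&gt;

```python
# Sketch (assumption: illustrative, not Amoeba's wire format) of an
# Amoeba-style capability: server port, object number, a rights bit
# map, and a random check field that makes capabilities unforgeable.
import os, hmac, hashlib

RIGHT_READ, RIGHT_WRITE, RIGHT_DELETE = 1, 2, 4

def new_capability(port, obj, rights, secret):
    # The server derives the check field from the fields and a secret
    # (HMAC here; real Amoeba used its own one-way function scheme).
    msg = b"%d:%d:%d" % (port, obj, rights)
    check = hmac.new(secret, msg, hashlib.sha256).hexdigest()[:12]
    return (port, obj, rights, check)

def server_allows(cap, wanted, secret):
    port, obj, rights, check = cap
    if new_capability(port, obj, rights, secret)[3] != check:
        return False                  # forged or tampered capability
    # All wanted bits must already be present in the rights bit map.
    return (rights | wanted) == rights

secret = os.urandom(16)
cap = new_capability(3001, 42, RIGHT_READ | RIGHT_WRITE, secret)
print(server_allows(cap, RIGHT_READ, secret))    # holder may read
print(server_allows(cap, RIGHT_DELETE, secret))  # but not delete
```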
&lt;br /&gt;
&lt;br /&gt;
=== Thread Management: ===&lt;br /&gt;
* A single process can have multiple threads; each thread has its own registers, program counter and stack &lt;br /&gt;
* Threads behave like processes &lt;br /&gt;
* They can be synchronized using mutexes and semaphores &lt;br /&gt;
* The file server (the Bullet server) uses multiple threads &lt;br /&gt;
* A thread blocks when the mutex it needs is held by another thread &lt;br /&gt;
* The careful reader may have noticed that a user process can pull 813 kbytes/sec&lt;br /&gt;
&lt;br /&gt;
= Unique features =&lt;br /&gt;
&lt;br /&gt;
== Pool processors ==&lt;br /&gt;
Pool processors are a group of CPUs that are dynamically allocated as users need them. When a program is executed, it runs on any of the available processors.&lt;br /&gt;
&lt;br /&gt;
== Supported architectures ==&lt;br /&gt;
Many different processor architectures are supported including:&lt;br /&gt;
* x86 (i80386, later Pentium)&lt;br /&gt;
* 68K&lt;br /&gt;
* SPARC&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Design Decisions ===&lt;br /&gt;
* Designed for a cluster of workstations with high-speed network access (it only really supports LANs).&lt;br /&gt;
* Abstracts away the physical architecture of the participating workstations by defining common protocols with well-defined interfaces.&lt;br /&gt;
&lt;br /&gt;
V ran on a LAN, and its developers created a very fast IPC protocol, which made it the fastest distributed operating system within a small geographic area. On top of the IPC protocol, V also implemented RPC calls in the background.&lt;br /&gt;
&lt;br /&gt;
V uses the strong consistency model. This model can cause issues with concurrency because in V files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This could cause problems unless there is an effective implementation to deal with multiple versions, etc.&lt;br /&gt;
&lt;br /&gt;
The VMTP protocol was used for communication. It supports request-response behavior, and also provides transparency, a group communication facility and flow control. It is quite similar to TCP.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19911</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=19911"/>
		<updated>2015-02-28T16:59:14Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Formatting, updates and additions to both sections.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, which was discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** It stores data and metadata; clients communicate directly with it to perform IO operations&lt;br /&gt;
** Data is stored in objects (variable size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** It is used to manage files and directories. Clients interact with it to perform metadata operations like open and rename. It also manages the capabilities of a client.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** It distributes the metadata among multiple metadata servers using dynamic sub-tree partitioning, meaning folders that get used more often get their meta-data replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the locations of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs&lt;br /&gt;
** Responsible for automatically moving data when ODSs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
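The &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039; idea can be sketched with a toy stand-in for CRUSH (rendezvous-style hashing here, not the real algorithm):&lt;br /&gt;

```python
# Sketch (assumption: a toy stand-in for CRUSH, not the real algorithm)
# of computing object placement instead of looking it up: every client
# runs the same hash, so no metadata server is consulted for data I/O.
import hashlib

def place(object_name, servers, replicas=3):
    # Rank servers by a hash of (object, server) and take the top ones.
    # Adding or removing a server only moves objects whose ranking
    # changes, which is the property CRUSH is built around.
    def score(s):
        h = hashlib.sha256((object_name + ":" + s).encode()).hexdigest()
        return int(h, 16)
    return sorted(servers, key=score, reverse=True)[:replicas]

servers = ["osd1", "osd2", "osd3", "osd4", "osd5"]
print(place("movies/cat.mp4", servers))
```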
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
It is a coarse-grained lock service, made and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Mainly consists of 5 servers known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi POSIX-compliant file-system with a 256KB file limit&lt;br /&gt;
** Permissions are for files only, not folders, thus breaking some compatibility&lt;br /&gt;
** Trivial to use by programs; just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
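The file-like interface can be sketched with a local stand-in (ordinary POSIX file locking here, not Chubby&#039;s actual client library; the path is illustrative):&lt;br /&gt;

```python
# Sketch (assumption: a local stand-in, not Chubby's client library) of
# why a lock service with a file-system interface is trivial to use:
# taking the lock looks like ordinary file I/O plus one lock call.
import fcntl

# Hypothetical lock file; in Chubby this would be a path in a cell.
with open("/tmp/master-election", "w") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the lock
    f.write("i-am-the-master\n")    # small metadata, like Chubby files
    f.flush()
    # ... act as master while the lock is held ...
    fcntl.flock(f, fcntl.LOCK_UN)
```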
&lt;br /&gt;
* Uses consensus algorithm among a set of servers to agree on who is the master that is in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved by using caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS for electing master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Due to the use of Paxos (at the time, essentially the only proven algorithm for this problem), a Chubby cell is limited to only 5 servers. While this limits fault-tolerance, in practice this is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19816</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=19816"/>
		<updated>2015-02-07T19:42:21Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Cleaned up, fixed typos, expanded some information.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
Multics came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs, General Electric and MIT, but Bell backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written mostly in the high-level language PL/I.&lt;br /&gt;
&lt;br /&gt;
It provides following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into Random Access Memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception is a small, minimal-API system designed by two people at Bell Labs. It was essentially an OS that would be easy for a programmer to grasp, but not much beyond that. The UNIX OS ran on one computer, with terminals connected to that computer. It is thus not a distributed operating system: it is centralized and implements time sharing. In fact, the first version didn&#039;t even have support for networking.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= Sun NFS =&lt;br /&gt;
&lt;br /&gt;
Sun NFS implemented networking using RPC connections. These connections are not secure:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow that they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
# Allowed for process migration&lt;br /&gt;
# Transparency&lt;br /&gt;
#* It provided network transparency to “disguise” its distributed context.&lt;br /&gt;
# Dynamic reconfiguration (it adapts to topology changes).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today&#039;s systems: it uses replication and partitioning, both of which are employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine, and if that machine was no longer idle because another user started using it, the system would take care of migrating the process to another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; By using a version number for each file&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; Disable cache to clients, enable write-blocking and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM in computers which can be used for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log structured file-systems are optimized for writes, as changes to previous data are appended at the current position.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSDs (solid-state drives)&lt;br /&gt;
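The append-only idea behind a log-structured file system can be sketched as follows (a minimal model, not Sprite&#039;s LFS):&lt;br /&gt;

```python
# Sketch (assumption: a minimal model, not Sprite's LFS) of a
# log-structured file system: every change is appended to the tail of
# one sequential log, and an in-memory index maps each file to the
# offset of its latest version.

class Log:
    def __init__(self):
        self.log = bytearray()  # the one on-disk structure, append-only
        self.index = {}         # file name to (offset, length) of newest copy

    def write(self, name, data):
        off = len(self.log)     # always the current tail: sequential I/O
        self.log += data
        self.index[name] = (off, len(data))

    def read(self, name):
        off, n = self.index[name]
        return bytes(self.log[off:off + n])

fs = Log()
fs.write("a", b"v1")
fs.write("a", b"v2")            # the old copy becomes garbage in the log
print(fs.read("a"))
```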
&lt;br /&gt;
The main features to take away from the Sprite system are its log-structured file system and its caching, which increased performance.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19815</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=19815"/>
		<updated>2015-02-07T17:34:43Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Updated the AFS section.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Andrew File System =&lt;br /&gt;
AFS (Andrew File System) was created as a direct response to NFS. Universities ran into problems when they tried to scale NFS to share files effectively among their staff. AFS was more scalable than NFS because read-write operations happened locally before being committed to the server (data store).&lt;br /&gt;
&lt;br /&gt;
Since AFS copied files locally when they were opened and only sent the data back when they were closed, all operations in between are very fast and do not need the network. NFS, in contrast, works with files remotely, so there is no data to transfer when opening/closing a file, making those operations instant.&lt;br /&gt;
&lt;br /&gt;
There are several problems with this design, however.&lt;br /&gt;
* The local system must have enough space to temporarily store the file.&lt;br /&gt;
* Opening and closing the files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).&lt;br /&gt;
* If the close operation fails, the system will not have the updated version of the file. Many programs don&#039;t even check the return value of the close operation, giving users the false impression that everything went well.&lt;br /&gt;
&lt;br /&gt;
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously difficult to set up, as it is geared towards university-sized networks, further limiting its success.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Amoeba Operating System =&lt;br /&gt;
&lt;br /&gt;
=== Capabilities: ===&lt;br /&gt;
* A capability acts as a pointer to an object: a kind of ticket or key that allows its holder to perform some (not necessarily all) operations on that object &lt;br /&gt;
* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each &lt;br /&gt;
* Because capabilities name objects anywhere on the network, they also work for communication across a wide-area network &lt;br /&gt;
* The second field is used by the server to identify which of its objects is being addressed: the server port and object number together identify the object on which the operation is to be performed &lt;br /&gt;
* The third field is the rights field, which contains a bit map telling which operations the holder of the capability may perform &lt;br /&gt;
* A 48-bit random number (the check field) makes capabilities hard to forge &lt;br /&gt;
* To invoke an operation, a client sends a request message and blocks; after the server has performed the operation, it sends back a reply message that unblocks the client &lt;br /&gt;
* Sending a message, blocking and accepting the reply together form a remote procedure call, which can be encapsulated to make an entire remote operation look like a local procedure &lt;br /&gt;
* X11 Window management&lt;br /&gt;
&lt;br /&gt;
=== Thread Management: ===&lt;br /&gt;
* A single process can have multiple threads; each thread has its own registers, program counter and stack &lt;br /&gt;
* Threads behave like processes &lt;br /&gt;
* They can be synchronized using mutexes and semaphores &lt;br /&gt;
* The file server (the Bullet server) uses multiple threads &lt;br /&gt;
* A thread blocks when the mutex it needs is held by another thread &lt;br /&gt;
* The careful reader may have noticed that a user process can pull 813 kbytes/sec&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as with individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Design Decisions ===&lt;br /&gt;
* Designed for a cluster of workstations with high-speed network access (it only really supports LANs).&lt;br /&gt;
* Abstracts away the physical architecture of the participating workstations by defining common protocols with well-defined interfaces.&lt;br /&gt;
&lt;br /&gt;
V ran on a LAN, and its developers created a very fast IPC protocol, which made it the fastest distributed operating system within a small geographic area. On top of the IPC protocol, V also implemented RPC calls in the background.&lt;br /&gt;
&lt;br /&gt;
V uses the strong consistency model. This model can cause issues with concurrency because in V files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This could cause problems unless there is an effective implementation to deal with multiple versions, etc.&lt;br /&gt;
&lt;br /&gt;
The VMTP protocol was used for communication. It supports request-response behavior, and also provides transparency, a group communication facility and flow control. It is quite similar to TCP.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19808</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19808"/>
		<updated>2015-02-07T15:56:24Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Formatted and added to the GFS section.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
It is a distributed OS running on a set of computers that are interconnected by a network. It basically unifies the different computers into a single system.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
The structure is based on an object-thread model. The system has a set of objects, each defined by a class. Objects respond to messages: sending a message to an object causes the object to execute the corresponding method and then reply. &lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects are objects which have one or more processes associated with them; they can communicate with the external environment. &lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
The content of Clouds data is long-lived. Since memory is implemented as a single-level store, data exists forever and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently. The nature of a Clouds object prohibits a thread from accessing any data outside the current address space in which it is executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural&lt;br /&gt;
# Invocations work across machine boundaries&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general-purpose, multi-user and mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs, the birthplace of Unix. The original Unix OS had no support for networking, and over the years others made many attempts to create distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost-effective, using cheap, modern microcomputers. &lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This is made possible by 2 properties:&lt;br /&gt;
# A per process group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries and system calls are similar to those of Unix, so a casual user cannot distinguish between the two. The problems in UNIX were too deep to fix, but many of its ideas were carried over: what UNIX addressed badly was improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along the lines of service function.&lt;br /&gt;
* CPU services and terminals use same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P.&lt;br /&gt;
# The disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls, mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; is used to attach a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; is used to attach a kernel-resident (existing, mounted) file system to the name space, and to rearrange pieces of the name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unmount&#039;&#039;&#039;, which undoes the effects of the other two calls.&lt;br /&gt;
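The mount/bind mechanism can be sketched with a toy per-process namespace (an illustrative model, not Plan 9&#039;s kernel):&lt;br /&gt;

```python
# Sketch (assumption: a toy model, not Plan 9's kernel) of a
# per-process namespace built with mount and bind: each process
# resolves names through its own table, so two processes can see
# entirely different trees.

class Namespace:
    def __init__(self):
        self.table = {}                      # mount point to backing service

    def mount(self, service, point):
        self.table[point] = service          # attach a new file service

    def bind(self, existing, point):
        # Re-attach an already-mounted service at another name.
        self.table[point] = self.table[existing]

    def resolve(self, path):
        # The longest matching mount point wins, like a real lookup.
        best = max((p for p in self.table if path.startswith(p)),
                   key=len, default=None)
        return self.table.get(best)

ns = Namespace()
ns.mount("local-disk", "/")
ns.mount("file-server", "/n/fs")
ns.bind("/n/fs", "/home")                    # /home now names the server
print(ns.resolve("/home/alice"))
```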
&lt;br /&gt;
Namespaces in Plan 9 are per-process. While everything has a unique name by which it can be referenced, every process can also use mount and bind to build its namespace as it sees fit.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
&lt;br /&gt;
=== Parallel Programming ===&lt;br /&gt;
The parallel programming has two aspects:&lt;br /&gt;
* Kernel provides simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* Programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem, which exposes current process information in the form of files, exists in modern Linux kernels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
It is a scalable, distributed file system for large, data-intensive applications. It is crafted to Google&#039;s unique needs as a search-engine company.&lt;br /&gt;
&lt;br /&gt;
Unlike most filesystems, GFS must be implemented by individual applications and is not part of the kernel. While this introduces some technical overhead, it gives the system more freedom to implement or not implement certain non-standard features.&lt;br /&gt;
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google file system consists of a single master, multiple chunk servers and multiple clients. Chunk servers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64KB blocks, each with its own checksum for data integrity checks. Chunks are replicated between servers, 3 times by default. The master maintains all the file system metadata, which includes the namespace and chunk locations.&lt;br /&gt;
&lt;br /&gt;
Each chunk is 64 MB large (contrast this with typical filesystem sectors of 512 or 4096 bytes), as the system is meant to hold enormous amounts of data - namely the internet. The large chunk size is also important for the scalability of the system: the larger the chunk size, the less metadata the master server has to store for any given amount of data. With the current size, the master server is able to store the entirety of the metadata in memory, increasing performance by a significant margin.&lt;br /&gt;
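A back-of-envelope calculation shows why the metadata fits in memory (the 64 bytes of metadata per chunk is an assumed round figure, not taken from the paper):&lt;br /&gt;

```python
# Back-of-envelope sketch of why 64 MB chunks let the GFS master keep
# all metadata in RAM. The 64 bytes per chunk is an assumed figure.
CHUNK = 64 * 1024 * 1024          # 64 MB per chunk
META_PER_CHUNK = 64               # assumed bytes of metadata per chunk

data = 1024 ** 5                  # 1 PB of stored data
chunks = data // CHUNK            # number of chunks the master tracks
print(chunks)                     # 16777216 chunks
print(chunks * META_PER_CHUNK)    # 1 GB of metadata: fits in memory
```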
&lt;br /&gt;
== Operation ==&lt;br /&gt;
&lt;br /&gt;
Master and chunk server communication consists of&lt;br /&gt;
# checking whether any chunk server is down,&lt;br /&gt;
# checking if any file is corrupted,&lt;br /&gt;
# deleting stale chunks.&lt;br /&gt;
&lt;br /&gt;
When a client wants to perform operations on chunks,&lt;br /&gt;
# it first asks the master server for the list of servers that store the parts of the file it wants to access,&lt;br /&gt;
# it receives a list of chunk servers, with multiple servers for each chunk,&lt;br /&gt;
# it finally communicates with the chunk servers directly to perform the operation.&lt;br /&gt;
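The three steps above can be sketched as follows (a toy model, not the actual GFS client library):&lt;br /&gt;

```python
# Sketch (assumption: a toy model, not the GFS client library) of the
# three-step read path: ask the master which servers hold each chunk,
# then fetch the chunk data directly from the chunk servers.

def master_lookup(master, name):
    # Steps 1 and 2: the master returns, per chunk, a list of replicas.
    return master[name]

def read_file(master, chunkservers, name):
    out = b""
    for chunk_id, replicas in master_lookup(master, name):
        server = replicas[0]                   # any replica will do
        out += chunkservers[server][chunk_id]  # step 3: talk to it directly
    return out

master = {"web/index": [("c1", ["s1", "s2"]), ("c2", ["s2", "s3"])]}
chunkservers = {
    "s1": {"c1": b"GOOG"},
    "s2": {"c1": b"GOOG", "c2": b"LE!!"},
    "s3": {"c2": b"LE!!"},
}
print(read_file(master, chunkservers, "web/index"))
```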
&lt;br /&gt;
The system is geared towards appends and sequential reads. This is why the master server responds with multiple server addresses for each chunk - the client can then request a small piece from each server, increasing the data throughput linearly with the number of servers. Writes, in general, are in the form of a special &#039;&#039;append&#039;&#039; system call. When appending, there is no chance that two clients will want to write to the same location at the same time. This helps avoid any potential synchronization issues. If there are multiple appends to the same file at the same time, the chunk servers are free to order them as they wish (chunks on each server are not guaranteed to be byte-for-byte identical). While a problem in the general sense, this is good enough for Google&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
GFS is built with failure in mind. The system expects that at any time, there is some server or disk that is malfunctioning. The system deals with the failures as follows.&lt;br /&gt;
&lt;br /&gt;
=== Chunk Servers ===&lt;br /&gt;
&lt;br /&gt;
By default, chunks are replicated to three servers; the exact number can be adjusted by the application doing the write. When a chunk server finds that some of its data is corrupt, it grabs the data from other servers to repair itself{{Citation needed}}.&lt;br /&gt;
&lt;br /&gt;
=== Master Server ===&lt;br /&gt;
&lt;br /&gt;
For efficiency, there is only a single live master server at a time. While this keeps the system from being completely distributed, it avoids many synchronization problems and suits Google&#039;s needs. At any point in time, there are multiple read-only master servers that copy metadata from the currently live master. Should the live master go down, they serve read operations from clients until one of the hot spares is promoted to being the new live master server.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19805</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19805"/>
		<updated>2015-02-07T15:23:36Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Formatted and edited Plan 9 section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
Clouds is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
The system is structured around an object-thread model: it has a set of objects, each defined by a class, which respond to messages. Sending a message to an object causes the object to execute the corresponding method and reply.&lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects have one or more processes associated with them and can communicate with the external environment.&lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
Data in Clouds is long lived. Since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently. The nature of the Clouds object prohibits a thread from accessing any data outside the current address space in which it is executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural.&lt;br /&gt;
# Invocations work across machine boundaries.&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multi-user and mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs - the birthplace of Unix. The original Unix OS had no support for networking, and over the years there were many attempts by others to create distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost effective, using cheap, modern microcomputers.&lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This transparency is made possible by two properties:&lt;br /&gt;
# A per process group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries and system calls are similar to those of Unix, so a casual user can hardly distinguish between the two. Unix&#039;s problems were too deep to fix in place, but its good ideas were carried forward: areas Unix handled badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along lines of service function.&lt;br /&gt;
* CPU servers and terminals use the same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on three principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P, is used to access these resources.&lt;br /&gt;
# The disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls - mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; is used to attach a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; is used to attach a kernel-resident (existing, mounted) file system to the name space and to rearrange pieces of the name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unbind&#039;&#039;&#039;, which undoes the effects of the other two calls.&lt;br /&gt;
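A per-process namespace built from these calls can be modelled as a private table from mount points to backing file trees (a toy sketch; the real calls take union-mount flags that are omitted here):

```python
import copy

# Toy model of Plan 9-style per-process namespaces: mount and bind
# edit only the calling process's private table, never a global one.
class Namespace:
    def __init__(self, table=None):
        self.table = dict(table or {})  # mount point -> backing tree

    def mount(self, point, service):
        self.table[point] = service  # attach a new file system

    def bind(self, new_point, old_point):
        # re-arrange: make an existing tree visible at another point
        self.table[new_point] = self.table[old_point]

    def unbind(self, point):
        del self.table[point]  # undo a mount or bind

    def fork(self):
        # a child starts with a copy of the parent's namespace,
        # then the two diverge independently
        return Namespace(copy.deepcopy(self.table))

parent = Namespace()
parent.mount("/n/remote", "file-server")
parent.bind("/bin", "/n/remote")
child = parent.fork()
child.mount("/net", "tcp-stack")
assert "/net" not in parent.table  # the parent's view is unchanged
```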
&lt;br /&gt;
Namespaces in Plan 9 are on a per-process basis. While every resource can be referenced by a unique name, using mount and bind each process can also build its namespace as it sees fit.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
&lt;br /&gt;
=== Parallel Programming ===&lt;br /&gt;
Parallel programming has two aspects:&lt;br /&gt;
* The kernel provides a simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* The programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem, which exposes current process information in the form of files, exists in modern Linux kernels.&lt;br /&gt;
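For example, process information can be read as plain text files on Linux (this is ordinary file I/O against the real /proc; the helper simply returns an empty dict on systems without it):

```python
import os

def read_proc_status(pid="self"):
    """Parse /proc/<pid>/status into a dict of field -> value.

    /proc exposes process state as text files, so no special system
    calls are needed - just open() and read(). Returns {} where /proc
    does not exist (e.g. non-Linux systems).
    """
    path = "/proc/%s/status" % pid
    if not os.path.exists(path):
        return {}
    info = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

status = read_proc_status()
# On Linux this includes fields such as "Name" and "Pid" for this process.
```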
&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
GFS is a scalable file system for large, distributed, data-intensive applications. The design is driven by observed application workloads and technical environments, both current and anticipated.&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google file system consists of a single master, multiple chunk servers and multiple clients. The chunk servers store data in units called chunks. Each chunk is identified by a globally unique 64-bit chunk handle assigned by the master at chunk creation time. For greater reliability and availability, chunks are replicated on multiple chunk servers. The master maintains all the file system metadata, which includes the namespace, chunk locations and access control information.&lt;br /&gt;
&lt;br /&gt;
Master and chunk-server communication serves to:&lt;br /&gt;
# check whether any chunk server is down&lt;br /&gt;
# check whether any file is corrupted&lt;br /&gt;
# decide whether to create or delete any chunk&lt;br /&gt;
&lt;br /&gt;
Operation of GFS:&lt;br /&gt;
# The client communicates with the master to get the metadata.&lt;br /&gt;
# The client gets chunk locations from the metadata.&lt;br /&gt;
# The client communicates with one of those chunk servers to retrieve the data and perform operations on it.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19804</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=19804"/>
		<updated>2015-02-07T14:57:00Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Formatted the clouds section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
Clouds is a distributed OS running on a set of computers interconnected by a network. It unifies the different computers into a single system.&lt;br /&gt;
&lt;br /&gt;
The OS is based on 2 patterns:&lt;br /&gt;
* Message Based OS&lt;br /&gt;
* Object Based  OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
The system is structured around an object-thread model: it has a set of objects, each defined by a class, which respond to messages. Sending a message to an object causes the object to execute the corresponding method and reply.&lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects have one or more processes associated with them and can communicate with the external environment.&lt;br /&gt;
# Passive objects are those that currently do not have an active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
Data in Clouds is long lived. Since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space. Several threads can enter an object simultaneously and execute concurrently. The nature of the Clouds object prohibits a thread from accessing any data outside the current address space in which it is executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural.&lt;br /&gt;
# Invocations work across machine boundaries.&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, making programming simpler.&lt;br /&gt;
# Control flow is achieved by threads invoking objects.&lt;br /&gt;
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general purpose, multiuser and mobile computing environment physically distributed across machines. &lt;br /&gt;
Plan 9 began in the late 1980s. The aims of the system are:&lt;br /&gt;
1) To build a system that can be centrally administered &lt;br /&gt;
2) To be cost effective, using cheap, modern microcomputers. &lt;br /&gt;
The distribution itself is transparent to most programs.&lt;br /&gt;
This is made possible by 2 properties:&lt;br /&gt;
1) A per-process group name space&lt;br /&gt;
2) Uniform access to all resources by representing them as files.&lt;br /&gt;
&lt;br /&gt;
It is quite similar to Unix and yet very different. The commands, libraries and system calls are similar to those of Unix, so a casual user can hardly distinguish between the two. Unix&#039;s problems were too deep to fix in place, but its good ideas were carried forward: areas Unix handled badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Plan 9 is divided along lines of service function. &lt;br /&gt;
* CPU servers and terminals use the same kernel&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers&lt;br /&gt;
* Gives the user the choice of a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
1) Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
2) A standard protocol, 9P&lt;br /&gt;
3) Disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
Another concept in Plan 9 is the &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;&lt;br /&gt;
In a &#039;&#039;&#039;Virtual Name Space&#039;&#039;&#039;, a user boots a terminal or connects to a CPU server, and a new process group is created. &lt;br /&gt;
Processes in the group can add to or rearrange their name space using two system calls - [[Mount]] and [[Bind]]&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; is used to attach a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; is used to attach a kernel-resident file system to the name space and to rearrange pieces of the name space.&lt;br /&gt;
&lt;br /&gt;
Plan 9 provides a mechanism to customize one&#039;s view of the system in software rather than hardware.&lt;br /&gt;
It is built around traditional file systems but can be extended to other resources. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Parallel Programming&#039;&#039;&#039;&lt;br /&gt;
Parallel programming has two aspects:&lt;br /&gt;
* The kernel provides a simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* The programming language supports concurrent programming.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Implementation of Name Spaces&#039;&#039;&#039;&lt;br /&gt;
User processes construct name spaces using three system calls - mount, bind, unmount.&lt;br /&gt;
Mount - attaches a tree served by a file server to the current name space&lt;br /&gt;
Bind - duplicates pieces of the existing name space at another point&lt;br /&gt;
Unmount - allows components to be removed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
GFS is a scalable file system for large, distributed, data-intensive applications. The design is driven by observed application workloads and technical environments, both current and anticipated.&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google file system consists of a single master, multiple chunk servers and multiple clients. The chunk servers store data in units called chunks. Each chunk is identified by a globally unique 64-bit chunk handle assigned by the master at chunk creation time. For greater reliability and availability, chunks are replicated on multiple chunk servers. The master maintains all the file system metadata, which includes the namespace, chunk locations and access control information.&lt;br /&gt;
&lt;br /&gt;
Master and Chunk-Server Communication:&lt;br /&gt;
a) To check whether any chunk server is down&lt;br /&gt;
b) To check whether any file is corrupted.&lt;br /&gt;
c) To decide whether to create or delete any chunk.&lt;br /&gt;
&lt;br /&gt;
Operation of GFS:&lt;br /&gt;
a) The client communicates with the master to get the metadata. &lt;br /&gt;
b) The client gets chunk locations from the metadata.&lt;br /&gt;
c) The client communicates with one of those chunk servers to retrieve the data and perform operations on it.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2015&amp;diff=19674</id>
		<title>Distributed OS: Winter 2015</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2015&amp;diff=19674"/>
		<updated>2015-01-14T21:56:27Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: /* Notes */ Updated link to second set of notes.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Course Outline==&lt;br /&gt;
&lt;br /&gt;
[[Distributed OS: Winter 2015 Course Outline|Here]] is the course outline.  It should see only minor modifications during the semester.&lt;br /&gt;
&lt;br /&gt;
==Assigned Readings==&lt;br /&gt;
&lt;br /&gt;
===January 12, 2015===&lt;br /&gt;
&lt;br /&gt;
The Early Internet:&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
The Alto:&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/alto.pdf Thacker et al., &amp;quot;Alto: A Personal computer&amp;quot; (1979)]  ([https://archive.org/details/bitsavers_xeroxparcttoAPersonalComputer_6560658 archive.org])&lt;br /&gt;
&lt;br /&gt;
The Mother of All Demos:&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;].  You may want to focus on the [http://dougengelbart.org/events/1968-demo-highlights.html highlights] or the [http://sloan.stanford.edu/MouseSite/1968Demo.html annotated clips].&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
===January 19, 2015===&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Multics Wikipedia article on Multics]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/unix.pdf Dennis M. Ritchie and Ken Thompson, &amp;quot;The UNIX Time-Sharing System&amp;quot; (1974)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-21/walker-locus.pdf Bruce Walker et al., &amp;quot;The LOCUS Distributed Operating System.&amp;quot; (1983)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/sandberg-nfs.pdf Russel Sandberg et al., &amp;quot;Design and Implementation of the Sun Network Filesystem&amp;quot; (1985)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-28/ousterhout-sprite.pdf John Ousterhout et al., &amp;quot;The Sprite Network Operating System&amp;quot; (1987)]&lt;br /&gt;
&lt;br /&gt;
===January 26, 2015===&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-21/cheriton-v.pdf David R. Cheriton, &amp;quot;The V Distributed System.&amp;quot; (1988)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-28/tanenbaum-amoeba.pdf Andrew Tannenbaum et al., &amp;quot;The Amoeba System&amp;quot; (1990)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-28/clouds-dasgupta.pdf Partha Dasgupta et al., &amp;quot;The Clouds Distributed Operating System&amp;quot; (1991)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/howard-afs.pdf John H. Howard et al., &amp;quot;Scale and Performance in a Distributed File System&amp;quot; (1988)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Other Readings==&lt;br /&gt;
&lt;br /&gt;
===The Early Web===&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
===Plan 9===&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2014w/presotto-plan9.pdf Presotto et. al, Plan 9, A Distributed System (1991)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2014w/pike-plan9.pdf Pike et al., Plan 9 from Bell Labs (1995)]&lt;br /&gt;
&lt;br /&gt;
===GFS and Ceph===&lt;br /&gt;
* [http://research.google.com/archive/gfs-sosp2003.pdf Sanjay Ghemawat et al., &amp;quot;The Google File System&amp;quot; (SOSP 2003)]&lt;br /&gt;
* [http://www.usenix.org/events/osdi06/tech/weil.html Weil et al., Ceph: A Scalable, High-Performance Distributed File System (OSDI 2006)].&lt;br /&gt;
&lt;br /&gt;
===Chubby===&lt;br /&gt;
&lt;br /&gt;
* [https://www.usenix.org/legacy/events/osdi06/tech/burrows.html Burrows, The Chubby Lock Service for Loosely-Coupled Distributed Systems (OSDI 2006)]&lt;br /&gt;
&lt;br /&gt;
===Oceanstore===&lt;br /&gt;
 &lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/oceanstore-sigplan.pdf John Kubiatowicz et al., &amp;quot;OceanStore: An Architecture for Global-Scale Persistent Storage&amp;quot; (2000)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/fast2003-pond.pdf Sean Rhea et al., &amp;quot;Pond: the OceanStore Prototype&amp;quot; (2003)]&lt;br /&gt;
&lt;br /&gt;
===Farsite===&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/adya-farsite-intro.pdf Atul Adya et al.,&amp;quot;FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment&amp;quot; (2002)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/bolosky-farsite-retro.pdf William J. Bolosky et al., &amp;quot;The Farsite Project: A Retrospective&amp;quot; (2007)]&lt;br /&gt;
&lt;br /&gt;
===Public Resource Computing===&lt;br /&gt;
&lt;br /&gt;
* Anderson et al., &amp;quot;SETI@home: An Experiment in Public-Resource Computing&amp;quot; (CACM 2002) [http://dx.doi.org/10.1145/581571.581573 (DOI)] [http://dl.acm.org.proxy.library.carleton.ca/citation.cfm?id=581573 (Proxy)]&lt;br /&gt;
* Anderson, &amp;quot;BOINC: A System for Public-Resource Computing and Storage&amp;quot; (Grid Computing 2004) [http://dx.doi.org/10.1109/GRID.2004.14 (DOI)] [http://ieeexplore.ieee.org.proxy.library.carleton.ca/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382809 (Proxy)]&lt;br /&gt;
&lt;br /&gt;
===Distributed Hash Tables===&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Distributed_hash_table Wikipedia&#039;s article on Distributed Hash Tables]&lt;br /&gt;
* [http://pdos.csail.mit.edu/~strib/docs/tapestry/tapestry_jsac03.pdf Zhao et al, &amp;quot;Tapestry: A Resilient Global-Scale Overlay for Service Deployment&amp;quot; (JSAC 2003)]&lt;br /&gt;
&lt;br /&gt;
===Structured Data===&lt;br /&gt;
&lt;br /&gt;
* [http://research.google.com/archive/bigtable-osdi06.pdf Chang et al., &amp;quot;BigTable: A Distributed Storage System for Structured Data&amp;quot; (OSDI 2006)]&lt;br /&gt;
* [http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf DeCandia et al., &amp;quot;Dynamo: Amazon’s Highly Available Key-value Store&amp;quot; (SOSP 2007)]&lt;br /&gt;
* [http://www.cs.cornell.edu/projects/ladis2009/papers/lakshman-ladis2009.pdf Lakshman &amp;amp; Malik, &amp;quot;Cassandra - A Decentralized Structured Storage System&amp;quot; (LADIS 2009)]&lt;br /&gt;
* [https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Geambasu.pdf Geambasu et al., &amp;quot;Comet: An active distributed key-value store&amp;quot; (OSDI 2010)]&lt;br /&gt;
&lt;br /&gt;
===Specialized Storage===&lt;br /&gt;
&lt;br /&gt;
* [http://static.usenix.org/legacy/events/osdi10/tech/full_papers/Beaver.pdf Beaver et al., &amp;quot;Finding a needle in Haystack: Facebook’s photo storage&amp;quot; (OSDI 2010)]&lt;br /&gt;
&lt;br /&gt;
===Computational Models===&lt;br /&gt;
&lt;br /&gt;
* [http://research.google.com/archive/mapreduce.html Dean &amp;amp; Ghemawat, &amp;quot;MapReduce: Simplified Data Processing on Large Clusters&amp;quot; (OSDI 2004)]&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?doid=2517349.2522738 Murray et al., &amp;quot;Naiad: a timely dataflow system&amp;quot; (SOSP 2013)]&lt;br /&gt;
&lt;br /&gt;
===Literature Review Help===&lt;br /&gt;
&lt;br /&gt;
* Harvey, &amp;quot;What Is a Literature Review?&amp;quot; [http://www.cs.cmu.edu/~missy/WritingaLiteratureReview.doc (DOC)] [http://www.cs.cmu.edu/~missy/Writing_a_Literature_Review.ppt (PPT)]&lt;br /&gt;
* [http://www.writing.utoronto.ca/advice/specific-types-of-writing/literature-review Taylor, &amp;quot;The Literature Review: A Few Tips On Conducting It&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Date&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Topic&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 5&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 1|Session 1]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 12&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 2|Session 2]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 19&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 3|Session 3]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 26&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 4|Session 4]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 2&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 5|Session 5]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 9&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 6|Session 6]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 23&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 7|Session 7]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 2&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 8|Session 8]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 9&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 9|Session 9]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 16&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 10|Session 10]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 23&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 11|Session 11]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 30&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 12|Session 12]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Apr. 6&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Project Presentations|Final Project Presentations]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;TBA&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;&#039;&#039;&#039;Final Exam&#039;&#039;&#039;&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Session_2&amp;diff=19673</id>
		<title>DistOS 2014W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Session_2&amp;diff=19673"/>
		<updated>2015-01-14T21:55:32Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Aimbot moved page DistOS 2014W Session 2 to DistOS 2015W Session 2: Wrong year&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[DistOS 2015W Session 2]]&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19672</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19672"/>
		<updated>2015-01-14T21:55:32Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Aimbot moved page DistOS 2014W Session 2 to DistOS 2015W Session 2: Wrong year&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Jason, Kirill, Sravan, Agustin, Hassan Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, led by Doug Engelbart&lt;br /&gt;
* Initial public display of many modern technologies&lt;br /&gt;
* One computer with multiple remote terminals&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse (and coined the term)&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking (hypertext)&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control: a user can grant read-only or read-write access to a file&lt;br /&gt;
* Chorded keyboard&lt;br /&gt;
** Macro keyboard that allows sending messages quickly, while using mouse at same time&lt;br /&gt;
* Not really a distributed operating system, but a great start because multiple users at different terminals could share same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Protocol similar to a cross between UDP/TCP, before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use processing power of others&lt;br /&gt;
* Much of the time it would be idling, which is amazing at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor-led discussion 1 =&lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it&#039;s more of a dream.&lt;br /&gt;
&lt;br /&gt;
Throughout the history of trying to achieve this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would come up with a solution, only to find that another technical issue would crop up. For example, &#039;The Mother of All Demos&#039; tried to build a distributed operating system but had no networking: they had to point a television camera at the computer monitor to be able to demo their concept. The early internet dealt with the lack of networking, but it had other technical issues of its own.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;time sharing&#039;&#039;&#039;. At the time it referred to multiple users sharing the CPU cycles of a single computer; today, a single user&#039;s many processes sharing a single CPU is much more common.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
| Can&#039;t be out of sync&lt;br /&gt;
| Synchronization issues&lt;br /&gt;
* Server-client model&lt;br /&gt;
** If two or more servers get requests at the same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
| Only one data store/source&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** Classic example would be Wireless Ad-hoc networking&lt;br /&gt;
&lt;br /&gt;
The above list is not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may arise.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=User:Aimbot&amp;diff=19671</id>
		<title>User:Aimbot</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=User:Aimbot&amp;diff=19671"/>
		<updated>2015-01-14T03:21:31Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Created page with &amp;quot;https://i.imgur.com/PqtnWTP.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://i.imgur.com/PqtnWTP.jpg&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2015&amp;diff=19670</id>
		<title>Distributed OS: Winter 2015</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2015&amp;diff=19670"/>
		<updated>2015-01-14T03:18:12Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: Added standard wikipedia table CSS class to the table of notes.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Course Outline==&lt;br /&gt;
&lt;br /&gt;
[[Distributed OS: Winter 2015 Course Outline|Here]] is the course outline.  It should see only minor modifications during the semester.&lt;br /&gt;
&lt;br /&gt;
==Assigned Readings==&lt;br /&gt;
&lt;br /&gt;
===January 12, 2015===&lt;br /&gt;
&lt;br /&gt;
The Early Internet:&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]  [http://dx.doi.org/10.1109/PROC.1972.8911 (DOI)]&lt;br /&gt;
* [https://archive.org/details/ComputerNetworks_TheHeraldsOfResourceSharing Computer Networks: The Heralds of Resource Sharing (1972)] - video&lt;br /&gt;
&lt;br /&gt;
The Alto:&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/distos/2014w/alto.pdf Thacker et al., &amp;quot;Alto: A Personal computer&amp;quot; (1979)]  ([https://archive.org/details/bitsavers_xeroxparcttoAPersonalComputer_6560658 archive.org])&lt;br /&gt;
&lt;br /&gt;
The Mother of All Demos:&lt;br /&gt;
* [http://www.dougengelbart.org/firsts/dougs-1968-demo.html Doug Engelbart Institute, &amp;quot;Doug&#039;s 1968 Demo&amp;quot;].  You may want to focus on the [http://dougengelbart.org/events/1968-demo-highlights.html highlights] or the [http://sloan.stanford.edu/MouseSite/1968Demo.html annotated clips].&lt;br /&gt;
* [http://en.wikipedia.org/wiki/The_Mother_of_All_Demos Wikipedia&#039;s page on &amp;quot;The Mother of all Demos&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
===January 19, 2015===&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Multics Wikipedia article on Multics]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/unix.pdf Dennis M. Ritchie and Ken Thompson, &amp;quot;The UNIX Time-Sharing System&amp;quot; (1974)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-21/walker-locus.pdf Bruce Walker et al., &amp;quot;The LOCUS Distributed Operating System.&amp;quot; (1983)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/sandberg-nfs.pdf Russel Sandberg et al., &amp;quot;Design and Implementation of the Sun Network Filesystem&amp;quot; (1985)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-28/ousterhout-sprite.pdf John Ousterhout et al., &amp;quot;The Sprite Network Operating System&amp;quot; (1987)]&lt;br /&gt;
&lt;br /&gt;
===January 26, 2015===&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-21/cheriton-v.pdf David R. Cheriton, &amp;quot;The V Distributed System.&amp;quot; (1988)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-28/tanenbaum-amoeba.pdf Andrew Tannenbaum et al., &amp;quot;The Amoeba System&amp;quot; (1990)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-01-28/clouds-dasgupta.pdf Partha Dasgupta et al., &amp;quot;The Clouds Distributed Operating System&amp;quot; (1991)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/howard-afs.pdf John H. Howard et al., &amp;quot;Scale and Performance in a Distributed File System&amp;quot; (1988)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Other Readings==&lt;br /&gt;
&lt;br /&gt;
===The Early Web===&lt;br /&gt;
&lt;br /&gt;
* [https://archive.org/details/02Kahle000673 Berners-Lee et al., &amp;quot;World-Wide Web: The Information Universe&amp;quot; (1992)], pp. 52-58&lt;br /&gt;
* [http://www.youtube.com/watch?v=72nfrhXroo8 Alex Wright, &amp;quot;The Web That Wasn&#039;t&amp;quot; (2007)], Google Tech Talk&lt;br /&gt;
&lt;br /&gt;
===Plan 9===&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2014w/presotto-plan9.pdf Presotto et. al, Plan 9, A Distributed System (1991)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/2014w/pike-plan9.pdf Pike et al., Plan 9 from Bell Labs (1995)]&lt;br /&gt;
&lt;br /&gt;
===GFS and Ceph===&lt;br /&gt;
* [http://research.google.com/archive/gfs-sosp2003.pdf Sanjay Ghemawat et al., &amp;quot;The Google File System&amp;quot; (SOSP 2003)]&lt;br /&gt;
* [http://www.usenix.org/events/osdi06/tech/weil.html Weil et al., Ceph: A Scalable, High-Performance Distributed File System (OSDI 2006)].&lt;br /&gt;
&lt;br /&gt;
===Chubby===&lt;br /&gt;
&lt;br /&gt;
* [https://www.usenix.org/legacy/events/osdi06/tech/burrows.html Burrows, The Chubby Lock Service for Loosely-Coupled Distributed Systems (OSDI 2006)]&lt;br /&gt;
&lt;br /&gt;
===Oceanstore===&lt;br /&gt;
 &lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/oceanstore-sigplan.pdf John Kubiatowicz et al., &amp;quot;OceanStore: An Architecture for Global-Scale Persistent Storage&amp;quot; (2000)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/fast2003-pond.pdf Sean Rhea et al., &amp;quot;Pond: the OceanStore Prototype&amp;quot; (2003)]&lt;br /&gt;
&lt;br /&gt;
===Farsite===&lt;br /&gt;
&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/adya-farsite-intro.pdf Atul Adya et al.,&amp;quot;FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment&amp;quot; (2002)]&lt;br /&gt;
* [http://homeostasis.scs.carleton.ca/~soma/distos/fall2008/bolosky-farsite-retro.pdf William J. Bolosky et al., &amp;quot;The Farsite Project: A Retrospective&amp;quot; (2007)]&lt;br /&gt;
&lt;br /&gt;
===Public Resource Computing===&lt;br /&gt;
&lt;br /&gt;
* Anderson et al., &amp;quot;SETI@home: An Experiment in Public-Resource Computing&amp;quot; (CACM 2002) [http://dx.doi.org/10.1145/581571.581573 (DOI)] [http://dl.acm.org.proxy.library.carleton.ca/citation.cfm?id=581573 (Proxy)]&lt;br /&gt;
* Anderson, &amp;quot;BOINC: A System for Public-Resource Computing and Storage&amp;quot; (Grid Computing 2004) [http://dx.doi.org/10.1109/GRID.2004.14 (DOI)] [http://ieeexplore.ieee.org.proxy.library.carleton.ca/stamp/stamp.jsp?tp=&amp;amp;arnumber=1382809 (Proxy)]&lt;br /&gt;
&lt;br /&gt;
===Distributed Hash Tables===&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Distributed_hash_table Wikipedia&#039;s article on Distributed Hash Tables]&lt;br /&gt;
* [http://pdos.csail.mit.edu/~strib/docs/tapestry/tapestry_jsac03.pdf Zhao et al, &amp;quot;Tapestry: A Resilient Global-Scale Overlay for Service Deployment&amp;quot; (JSAC 2003)]&lt;br /&gt;
&lt;br /&gt;
===Structured Data===&lt;br /&gt;
&lt;br /&gt;
* [http://research.google.com/archive/bigtable-osdi06.pdf Chang et al., &amp;quot;BigTable: A Distributed Storage System for Structured Data&amp;quot; (OSDI 2006)]&lt;br /&gt;
* [http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf DeCandia et al., &amp;quot;Dynamo: Amazon’s Highly Available Key-value Store&amp;quot; (SOSP 2007)]&lt;br /&gt;
* [http://www.cs.cornell.edu/projects/ladis2009/papers/lakshman-ladis2009.pdf Lakshman &amp;amp; Malik, &amp;quot;Cassandra - A Decentralized Structured Storage System&amp;quot; (LADIS 2009)]&lt;br /&gt;
* [https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Geambasu.pdf Geambasu et al., &amp;quot;Comet: An active distributed key-value store&amp;quot; (OSDI 2010)]&lt;br /&gt;
&lt;br /&gt;
===Specialized Storage===&lt;br /&gt;
&lt;br /&gt;
* [http://static.usenix.org/legacy/events/osdi10/tech/full_papers/Beaver.pdf Beaver et al., &amp;quot;Finding a needle in Haystack: Facebook’s photo storage&amp;quot; (OSDI 2010)]&lt;br /&gt;
&lt;br /&gt;
===Computational Models===&lt;br /&gt;
&lt;br /&gt;
* [http://research.google.com/archive/mapreduce.html Dean &amp;amp; Ghemawat, &amp;quot;MapReduce: Simplified Data Processing on Large Clusters&amp;quot; (OSDI 2004)]&lt;br /&gt;
* [http://dl.acm.org/citation.cfm?doid=2517349.2522738 Murray et al., &amp;quot;Naiad: a timely dataflow system&amp;quot; (SOSP 2013)]&lt;br /&gt;
&lt;br /&gt;
===Literature Review Help===&lt;br /&gt;
&lt;br /&gt;
* Harvey, &amp;quot;What Is a Literature Review?&amp;quot; [http://www.cs.cmu.edu/~missy/WritingaLiteratureReview.doc (DOC)] [http://www.cs.cmu.edu/~missy/Writing_a_Literature_Review.ppt (PPT)]&lt;br /&gt;
* [http://www.writing.utoronto.ca/advice/specific-types-of-writing/literature-review Taylor, &amp;quot;The Literature Review: A Few Tips On Conducting It&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot; class=&amp;quot;wikitable&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Date&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Topic&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 5&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 1|Session 1]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 12&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2014W Session 2|Session 2]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 19&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 3|Session 3]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Jan. 26&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 4|Session 4]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 2&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 5|Session 5]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 9&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 6|Session 6]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Feb. 23&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 7|Session 7]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 2&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 8|Session 8]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 9&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 9|Session 9]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 16&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 10|Session 10]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 23&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 11|Session 11]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Mar. 30&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Session 12|Session 12]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Apr. 6&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[DistOS 2015W Project Presentations|Final Project Presentations]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;TBA&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;&#039;&#039;&#039;Final Exam&#039;&#039;&#039;&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19669</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19669"/>
		<updated>2015-01-14T03:15:49Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: /* Professor lead discussion 1 */ Updated time sharing definition.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Jason, Kirill, Sravan, Agustin, Hassan Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, led by Doug Engelbart&lt;br /&gt;
* Initial public display of many modern technologies&lt;br /&gt;
* One computer with multiple remote terminals&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse (and coined the term)&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking (hypertext)&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control: a user can grant read-only or read-write access to a file&lt;br /&gt;
* Chorded keyboard&lt;br /&gt;
** Macro keyboard that allows sending messages quickly, while using mouse at same time&lt;br /&gt;
* Not really a distributed operating system, but a great start because multiple users at different terminals could share same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Protocol similar to a cross between UDP/TCP, before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use processing power of others&lt;br /&gt;
* Much of the time it would be idling, which is amazing at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor-led discussion 1 =&lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it&#039;s more of a dream.&lt;br /&gt;
&lt;br /&gt;
Throughout the history of trying to achieve this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would come up with a solution, only to find that another technical issue would crop up. For example, &#039;The Mother of All Demos&#039; tried to build a distributed operating system but had no networking: they had to point a television camera at the computer monitor to be able to demo their concept. The early internet dealt with the lack of networking, but it had other technical issues of its own.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;time sharing&#039;&#039;&#039;. At the time it referred to multiple users sharing the CPU cycles of a single computer; today, a single user&#039;s many processes sharing a single CPU is much more common.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
| Can&#039;t be out of sync&lt;br /&gt;
| Synchronization issues&lt;br /&gt;
* Server-client model&lt;br /&gt;
** If two or more servers get requests at the same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
| Only one data store/source&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** Classic example would be Wireless Ad-hoc networking&lt;br /&gt;
&lt;br /&gt;
The above list is not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may arise.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19668</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19668"/>
		<updated>2015-01-14T03:09:38Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: /* Mother of All Demos: */ Minor updates, clarifications&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Jason, Kirill, Sravan, Agustin, Hassan Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, led by Doug Engelbart&lt;br /&gt;
* Initial public display of many modern technologies&lt;br /&gt;
* One computer with multiple remote terminals&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse (and coined the term)&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking (hypertext)&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control: a user can grant read-only or read-write access to a file&lt;br /&gt;
* Chorded keyboard&lt;br /&gt;
** Macro keyboard that allows sending messages quickly, while using mouse at same time&lt;br /&gt;
* Not really a distributed operating system, but a great start because multiple users at different terminals could share same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Protocol similar to a cross between UDP/TCP, before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use processing power of others&lt;br /&gt;
* Much of the time it would be idling, which is amazing at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor-led discussion 1 =&lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it&#039;s more of a dream.&lt;br /&gt;
&lt;br /&gt;
Throughout the history of trying to achieve this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would come up with a solution, only to find that another technical issue would crop up. For example, &#039;The Mother of All Demos&#039; tried to build a distributed operating system but had no networking: they had to point a television camera at the computer monitor to be able to demo their concept. The early internet dealt with the lack of networking, but it had other technical issues of its own.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;time sharing&#039;&#039;&#039;: the sharing of computing resources between users. Today the concept is closer to sharing a processor between multiple processes.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
| Can&#039;t be out of sync&lt;br /&gt;
| Synchronization issues&lt;br /&gt;
* Server-client modal&lt;br /&gt;
** If two or more servers they get requests same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
| Only one data store/source&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
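&lt;br /&gt;
The server-synchronization issue in the table above can be sketched with Lamport logical clocks, a standard way to order events across machines. The technique and every name in the sketch are illustrative assumptions added to these notes, not something covered in class.&lt;br /&gt;
&lt;br /&gt;
```python
# Minimal sketch of Lamport logical clocks for ordering chat messages
# across two servers. All names here are illustrative assumptions.

class Server:
    def __init__(self, name):
        self.name = name
        self.clock = 0   # local logical clock
        self.log = []    # (timestamp, sender, text) tuples

    def send(self, text):
        self.clock += 1  # tick on every local event
        msg = (self.clock, self.name, text)
        self.log.append(msg)
        return msg

    def receive(self, msg):
        ts, sender, text = msg
        # jump past the sender clock so causal order is preserved
        self.clock = max(self.clock, ts) + 1
        self.log.append(msg)

a = Server("a")
b = Server("b")
m = a.send("hello")
b.receive(m)
r = b.send("hi back")  # timestamp is guaranteed larger than m
a.receive(r)

# Every server can agree on a single order by sorting on
# (timestamp, sender), which breaks ties deterministically.
order = sorted(set(a.log) | set(b.log))
```
&lt;br /&gt;
Sorting on (timestamp, sender) gives one total order that respects causality, which is exactly what two servers receiving requests at the same time need to agree on.&lt;br /&gt;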
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** A classic example would be wireless ad-hoc networking&lt;br /&gt;
&lt;br /&gt;
The above list is not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may arise.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19667</id>
		<title>DistOS 2015W Session 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_2&amp;diff=19667"/>
		<updated>2015-01-14T03:03:52Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: /* Alto */ Clarifications, minor updates&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Reading Response Discussion =&lt;br /&gt;
&lt;br /&gt;
== Mother of All Demos: ==&lt;br /&gt;
&lt;br /&gt;
Team: Jason, Kirill, Sravan, Agustin, Hassan Ambalica, Apoorv, Khaled&lt;br /&gt;
&lt;br /&gt;
* 1968, Doug Engelbart&lt;br /&gt;
* One computer with multiple terminals controlling it&lt;br /&gt;
* Video conferencing&lt;br /&gt;
* Computer mouse&lt;br /&gt;
* Word processing, rudimentary copy and paste&lt;br /&gt;
* Dynamic file linking&lt;br /&gt;
* Revision control/version control/source control&lt;br /&gt;
* Collaborative real-time editor&lt;br /&gt;
** User privilege control: a user can grant read-only or read-write access to a file&lt;br /&gt;
* Chord keyboard&lt;br /&gt;
** Macro keyboard that allows entering messages quickly while using the mouse at the same time&lt;br /&gt;
* Not really a distributed operating system, but a great start because multiple users at different terminals could share the same resources&lt;br /&gt;
&lt;br /&gt;
== Early Internet ==&lt;br /&gt;
&lt;br /&gt;
== Alto ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Tory, Veena, Sameer, Mert, Deep, Nameet, Moe&lt;br /&gt;
&lt;br /&gt;
* Initially developed around 1973 by Xerox, upgraded over the next few years&lt;br /&gt;
* High speed network connectivity (3 Mbps @ 1km distance)&lt;br /&gt;
* Connected up to 256 machines&lt;br /&gt;
* Protocol similar to a cross between UDP and TCP, before TCP was invented&lt;br /&gt;
* Allowed sharing of printers&lt;br /&gt;
* Also allowed distribution of files across computers (redundancy/reliability)&lt;br /&gt;
* Sort of an early cloud&lt;br /&gt;
* Allowed for remote debugging and storing error logs&lt;br /&gt;
* Allowed machines to use the processing power of others&lt;br /&gt;
* Much of the time a machine would otherwise sit idle, which was remarkable at a time when computers cost a fortune&lt;br /&gt;
&lt;br /&gt;
= Professor-led discussion 1 =&lt;br /&gt;
&lt;br /&gt;
A true distributed operating system does not actually exist; it&#039;s more of a dream.&lt;br /&gt;
&lt;br /&gt;
Throughout the history of chasing this dream of a distributed operating system, there has always been a roadblock caused by some technical issue. People would solve one technical issue, only to find that another cropped up. For example, &#039;The Mother of All Demos&#039; tried to build a distributed operating system, but they had no networking: they had to point a television camera at the computer monitor to demo their concept. The early internet dealt with the lack of networking, but it had other technical issues of its own.&lt;br /&gt;
&lt;br /&gt;
A common buzzword during the early days of development was &#039;&#039;&#039;Time Sharing&#039;&#039;&#039;. Time sharing is about sharing resources between users. Today the concept survives as sharing a processor between multiple processes.&lt;br /&gt;
&lt;br /&gt;
= Discussion: Easy on one computer, hard on multiple computers =&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Instant Messaging&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
| Can&#039;t be out of sync&lt;br /&gt;
| Synchronization issues&lt;br /&gt;
* Client-server model&lt;br /&gt;
** If two or more servers receive requests at the same time, they have to figure out how to synchronize the data&lt;br /&gt;
* Peer-to-Peer&lt;br /&gt;
** Each peer has to figure out how to sync each message&lt;br /&gt;
|-&lt;br /&gt;
| Only one data store/source&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
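&lt;br /&gt;
The server-synchronization issue in the table above can be sketched with Lamport logical clocks, a standard way to order events across machines. The technique and every name in the sketch are illustrative assumptions added to these notes, not something covered in class.&lt;br /&gt;
&lt;br /&gt;
```python
# Minimal sketch of Lamport logical clocks for ordering chat messages
# across two servers. All names here are illustrative assumptions.

class Server:
    def __init__(self, name):
        self.name = name
        self.clock = 0   # local logical clock
        self.log = []    # (timestamp, sender, text) tuples

    def send(self, text):
        self.clock += 1  # tick on every local event
        msg = (self.clock, self.name, text)
        self.log.append(msg)
        return msg

    def receive(self, msg):
        ts, sender, text = msg
        # jump past the sender clock so causal order is preserved
        self.clock = max(self.clock, ts) + 1
        self.log.append(msg)

a = Server("a")
b = Server("b")
m = a.send("hello")
b.receive(m)
r = b.send("hi back")  # timestamp is guaranteed larger than m
a.receive(r)

# Every server can agree on a single order by sorting on
# (timestamp, sender), which breaks ties deterministically.
order = sorted(set(a.log) | set(b.log))
```
&lt;br /&gt;
Sorting on (timestamp, sender) gives one total order that respects causality, which is exactly what two servers receiving requests at the same time need to agree on.&lt;br /&gt;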
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Mortal Kombat/Twitch Gaming&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot; &lt;br /&gt;
|+ &#039;&#039;&#039;Photo Album&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Why is it easy?&lt;br /&gt;
! Why is it hard?&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Commonalities ===&lt;br /&gt;
* Synchronization&lt;br /&gt;
* Bandwidth&lt;br /&gt;
* Reliability/Fault Tolerance&lt;br /&gt;
* Interoperability&lt;br /&gt;
* Discovery&lt;br /&gt;
* Routing&lt;br /&gt;
** In modern systems this issue has been mostly abstracted away&lt;br /&gt;
** A classic example would be wireless ad-hoc networking&lt;br /&gt;
&lt;br /&gt;
The above list is not hard on a single system because all the cores have equal access to all the resources. Also, as a result of excellent engineering, modern single systems can deal very effectively with any errors that may arise.&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_1&amp;diff=19625</id>
		<title>DistOS 2015W Session 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_1&amp;diff=19625"/>
		<updated>2015-01-11T20:05:17Z</updated>

		<summary type="html">&lt;p&gt;Aimbot: General clean-up (grammar, spelling, wiki-markup, etc)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Notes for the first session that happened on Jan. 5th, 2015.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Course Outline =&lt;br /&gt;
&lt;br /&gt;
Undergrad Grading Scheme&lt;br /&gt;
* 15% Class Participation &lt;br /&gt;
* 15% Reading Responses&lt;br /&gt;
* 10% Lecture Notes/Wiki Contributions &lt;br /&gt;
* 25% Midterm &lt;br /&gt;
* 35% Final Exam &lt;br /&gt;
&lt;br /&gt;
Graduate Grading Scheme&lt;br /&gt;
* 15% Class Participation &lt;br /&gt;
* 15% Reading Responses&lt;br /&gt;
* 10% Lecture Notes/Wiki Contributions&lt;br /&gt;
* 10% Project Proposal &lt;br /&gt;
* 15% Project Presentation &lt;br /&gt;
* 35% Final Project &lt;br /&gt;
&lt;br /&gt;
=== Project ===&lt;br /&gt;
* A literature review of distributed operating systems&lt;br /&gt;
* Research proposal on a problem related to distributed systems &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Discussion =&lt;br /&gt;
&lt;br /&gt;
== Q: What do you think of when you hear &#039;Distributed System&#039;? ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
* Sharing Resources&lt;br /&gt;
* Spreading Work Loads&lt;br /&gt;
* Scheduling&lt;br /&gt;
* Process migration&lt;br /&gt;
* Different nodes have different purposes&lt;br /&gt;
* Google&lt;br /&gt;
* Parallel running processes&lt;br /&gt;
* Nodes&lt;br /&gt;
* Resource Allocation across multiple nodes&lt;br /&gt;
* Scheduling multiple nodes&lt;br /&gt;
* Resources availability among nodes&lt;br /&gt;
* Problem: across multiple machines&lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
* Request comes from (usually) one computer and processing is usually handled by more than one computer&lt;br /&gt;
* Tasks are divided into small parts which can be processed individually before coming back together&lt;br /&gt;
* Usually deals with large scale data sets&lt;br /&gt;
* Globalization&lt;br /&gt;
* Fault tolerance&lt;br /&gt;
* Usually distributed agents are not expected to work on a fixed schedule&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* Separated, networked machines&lt;br /&gt;
* Coordinated by similar or identical software &lt;br /&gt;
* Error recovery/redundancy &lt;br /&gt;
* No centralized storage &lt;br /&gt;
* Coordinated communication facilitating operation on coordinated task &lt;br /&gt;
* Leader/hierarchy for task delegation &lt;br /&gt;
* Examples: Map Reduce, cloud, cloud software (Google Drive)&lt;br /&gt;
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* Distributed, multiple systems&lt;br /&gt;
* OS: The root level system that operates a computer system &lt;br /&gt;
* Dist. OS: adds more complexity &lt;br /&gt;
&lt;br /&gt;
=== Prof Discussion Notes ===&lt;br /&gt;
Key words: network, parallel, fault tolerant, redundancy, complexity&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
== Distributed OS is kind of OS ==&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;But what is an OS in the first place?&#039;&#039;&#039;&lt;br /&gt;
* Connection between software and hardware&lt;br /&gt;
* Resource allocation&lt;br /&gt;
* Abstraction layers&lt;br /&gt;
* Makes it easy to run higher level programs on different computers&lt;br /&gt;
* Sharing: resources are split up and each process is isolated; this improves security and simplifies programming (no need to worry about sharing)&lt;br /&gt;
* Virtual memory, process scheduling&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;What is a distributed system?&#039;&#039;&#039;&lt;br /&gt;
* Feels like it is only a single node, but runs on many&lt;br /&gt;
* Do the things users take for granted on regular computers&lt;br /&gt;
* Doesn&#039;t actually work as a single (general use) computer - with 1000 computers one would expect it to be like a single computer with 1000 times the resources, but it&#039;s not&lt;br /&gt;
* In a way, opposite of an operating system: instead of splitting up a single computer into many, it takes many and tries to merge into one&lt;br /&gt;
* Resources can be divided as you want &lt;br /&gt;
* Centralized control solves some issues, but has issues of its own&lt;br /&gt;
* Since it cannot truly act as a single computer, we fake it as much as possible - and it works in some scenarios&lt;br /&gt;
* The more specialized the task, the better it will scale&lt;br /&gt;
* Interactivity: process updates via callbacks&lt;br /&gt;
* Gmail is so responsive because it downloads data in your browser ahead of time&lt;br /&gt;
* Caches predict what the processor is going to do&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
== Operating System Examples ==&lt;br /&gt;
&#039;&#039;&#039;Mobile devices - Phones&#039;&#039;&#039;&lt;br /&gt;
* iOS&lt;br /&gt;
* Android&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Embedded OS&#039;&#039;&#039; &lt;br /&gt;
* Linux&lt;br /&gt;
* QNX&lt;br /&gt;
* xBSD firewalls &lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Desktop&#039;&#039;&#039;&lt;br /&gt;
* Windows &lt;br /&gt;
* OSX&lt;br /&gt;
* Chrome OS&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Server&#039;&#039;&#039;&lt;br /&gt;
* Windows&lt;br /&gt;
* Linux&lt;br /&gt;
* BSD&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Main Frames&#039;&#039;&#039;&lt;br /&gt;
* OS/400&lt;br /&gt;
	&lt;br /&gt;
Is the cloud an OS? Important question.&lt;br /&gt;
	&lt;br /&gt;
&#039;&#039;&#039;Cloud&#039;&#039;&#039;&lt;br /&gt;
* MPI&lt;br /&gt;
* AWS&lt;br /&gt;
* Google App Engine &lt;br /&gt;
	&lt;br /&gt;
No, they are at best proto-OSs because the abstractions they provide are very leaky. They only provide limited APIs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pick a system and show how it is and is not an OS ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|+ &#039;&#039;&#039;BOINC&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Similarities to a traditional OS&lt;br /&gt;
! Differences from a traditional OS&lt;br /&gt;
|-&lt;br /&gt;
| Very parallel&lt;br /&gt;
| Can handle only very specific (trivially parallelizable) types of problems&lt;br /&gt;
|-&lt;br /&gt;
| Availability&lt;br /&gt;
| Abstraction layer is poor&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
|-&lt;br /&gt;
| Networked&lt;br /&gt;
|-&lt;br /&gt;
| Nodes may come and leave as they like (it is active when the computer is idle)&lt;br /&gt;
|-&lt;br /&gt;
| Same problem at the same time&lt;br /&gt;
|-&lt;br /&gt;
| Redundant/Fault tolerant&lt;br /&gt;
|-&lt;br /&gt;
| Allocation can be handled by the system&lt;br /&gt;
|-&lt;br /&gt;
| Large problem managed by one system&lt;br /&gt;
|-&lt;br /&gt;
| Anyone can submit projects&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|+ &#039;&#039;&#039;Facebook&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Similarities to a traditional OS&lt;br /&gt;
! Differences from a traditional OS&lt;br /&gt;
|-&lt;br /&gt;
| OK abstraction (for human communication/control)&lt;br /&gt;
| Doesn&#039;t really take a single resource and split it up into smaller ones &lt;br /&gt;
|-&lt;br /&gt;
| The programming API is stable&lt;br /&gt;
| The human API (user interface) is not stable&lt;br /&gt;
|-&lt;br /&gt;
| Able to separate resources such as wall posts, photos, etc&lt;br /&gt;
| Limited control&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|+ &#039;&#039;&#039;Google Docs (Drive)&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Similarities to a traditional OS&lt;br /&gt;
! Differences from a traditional OS&lt;br /&gt;
|-&lt;br /&gt;
| Has a file system&lt;br /&gt;
| Somehow still not a true OS&lt;br /&gt;
|-&lt;br /&gt;
| Provides hardware abstraction (users don&#039;t care how the requests are carried out)&lt;br /&gt;
|-&lt;br /&gt;
| Has an API&lt;br /&gt;
|}&lt;br /&gt;
	&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|+ &#039;&#039;&#039;LinkedIn&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
! Similarities to a traditional OS&lt;br /&gt;
! Differences from a traditional OS&lt;br /&gt;
|-&lt;br /&gt;
| Supports multiple location/users&lt;br /&gt;
| Specific functionality (not abstract)&lt;br /&gt;
|-&lt;br /&gt;
| Multiple servers around the world&lt;br /&gt;
|-&lt;br /&gt;
| Resource/Security management&lt;br /&gt;
|-&lt;br /&gt;
| Networked&lt;br /&gt;
|-&lt;br /&gt;
| Cross-platform (Desktop/Mobile)&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Aimbot</name></author>
	</entry>
</feed>