<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mohamedahmed</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mohamedahmed"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Mohamedahmed"/>
	<updated>2026-05-13T13:29:07Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=20214</id>
		<title>DistOS 2015W Session 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_4&amp;diff=20214"/>
		<updated>2015-04-28T05:38:23Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Capablities: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Andrew File System =&lt;br /&gt;
AFS (Andrew File System) was set up as a direct response to NFS. Essentially universities found issues when they tried to scale NFS in a way that would allow them to share files amongst their staff effectively. AFS was more scalable than NFS because read-write operations happened locally before they were committed to the server (data store).&lt;br /&gt;
&lt;br /&gt;
Since AFS copies files locally when they are opened and only sends the data back when they are closed, all operations between opening and closing the file are very fast and do not touch the network. NFS works with files remotely, so there is no data to transfer when opening or closing a file, making those operations effectively instant.&lt;br /&gt;
&lt;br /&gt;
There are several problems with this design, however:&lt;br /&gt;
* The local system must have enough space to temporarily store the file.&lt;br /&gt;
* Opening and closing the files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).&lt;br /&gt;
* If the close operation fails, the system will not have the updated version of the file. Many programs are designed around local filesystems, and therefore don&#039;t even check the return value of the close operation (as this is unlikely to fail on a local FS), giving users the false impression that everything went well.&lt;br /&gt;
&lt;br /&gt;
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously annoying to set up as it is geared towards university-sized networks, further limiting its success.&lt;br /&gt;
&lt;br /&gt;
*Kerberos protocol&lt;br /&gt;
**an authentication protocol based on time-limited tickets&lt;br /&gt;
**a single sign-on system: authenticate once, then use other services&lt;br /&gt;
**AFS uses Kerberos for authentication, and implements access control lists on directories for users and groups.&lt;br /&gt;
&lt;br /&gt;
= Amoeba Operating System =&lt;br /&gt;
&lt;br /&gt;
The Amoeba research project had the goal of understanding how to connect multiple computers in a seamless way. The main aim was to build a distributed system that is transparent to its users. This differs from a network operating system, where a user is aware of the separate nodes being accessed. &lt;br /&gt;
&lt;br /&gt;
=== Capabilities: ===&lt;br /&gt;
* A capability is a kind of ticket or key that allows its holder to perform some (not necessarily all) operations on an object.&lt;br /&gt;
* A capability acts as a pointer to the object, combined with the rights to operate on it.&lt;br /&gt;
* Each user process owns a collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each.&lt;br /&gt;
* Capabilities can be communicated across wide area networks.&lt;br /&gt;
* A client invokes an operation by sending a request message; after the server has performed the operation, it sends back a reply message that unblocks the client.&lt;br /&gt;
* Sending a message, blocking, and accepting the reply together form a remote procedure call, which can be encapsulated to make an entire remote operation look like a local procedure call.&lt;br /&gt;
* The second field of a capability is used by the server to identify which of its objects is being addressed: the server port and object number together identify the object on which the operation is to be performed.&lt;br /&gt;
* The third field is the rights field, a bit map telling which operations the holder of the capability may perform.&lt;br /&gt;
* The server generates a 48-bit random number to protect capabilities against forgery.&lt;br /&gt;
* X11 Window management&lt;br /&gt;
&lt;br /&gt;
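The capability layout described above can be sketched as a small data structure. This is a hypothetical illustration in Python, not Amoeba code; the field and constant names are invented for the example:&lt;br /&gt;

```python
# Illustrative sketch of an Amoeba-style capability (names are invented).
from dataclasses import dataclass

# Hypothetical rights bits for the bit-map rights field.
RIGHT_READ = 0x01
RIGHT_WRITE = 0x02

@dataclass(frozen=True)
class Capability:
    server_port: int    # identifies the server that manages the object
    object_number: int  # which of the server's objects is being addressed
    rights: int         # bit map of the operations the holder may perform
    check: int          # 48-bit random number guarding against forgery

    def allows(self, op: int) -> bool:
        # An operation is permitted only if its bit is set in the rights field.
        return (self.rights & op) != 0

cap = Capability(server_port=7, object_number=42,
                 rights=RIGHT_READ, check=0x1A2B3C4D5E6F)
assert cap.allows(RIGHT_READ)
assert not cap.allows(RIGHT_WRITE)
```

Together, the server port and object number say ''which'' object is meant, while the rights bit map says ''what'' the holder may do to it, matching the field-by-field description above.&lt;br /&gt;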
=== Thread Management: ===&lt;br /&gt;
* A single process can contain multiple threads; each thread has its own registers, program counter, and stack.&lt;br /&gt;
* Threads behave like processes but share one address space.&lt;br /&gt;
* Threads can synchronize using mutexes and semaphores.&lt;br /&gt;
* The file (Bullet) server uses multiple threads; a thread blocks when it tries to take a mutex held by another thread.&lt;br /&gt;
* A user process can pull 813 Kbytes/sec over the network.&lt;br /&gt;
&lt;br /&gt;
= Unique features =&lt;br /&gt;
&lt;br /&gt;
== Pool processors ==&lt;br /&gt;
Pool processors are a group of CPUs that are dynamically allocated as users need them. When a program is executed, it runs on any of the available processors.&lt;br /&gt;
&lt;br /&gt;
== Supported architectures ==&lt;br /&gt;
Many different processor architectures are supported including:&lt;br /&gt;
* Intel i80386&lt;br /&gt;
* 68K&lt;br /&gt;
* SPARC&lt;br /&gt;
&lt;br /&gt;
= The V Distributed System = &lt;br /&gt;
&lt;br /&gt;
* First tenet of the V design: high-performance communication is the most critical facility for distributed systems.&lt;br /&gt;
* Second: the protocols, not the software, define the system.&lt;br /&gt;
* Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space &amp;amp; communication model.&lt;br /&gt;
&lt;br /&gt;
=== Ideas that significantly affected the design ===&lt;br /&gt;
* Shared Memory.&lt;br /&gt;
* Dealing with groups of entities the same way as individual entities.&lt;br /&gt;
* Efficient file caching mechanism using the virtual memory caching mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Design Decisions ===&lt;br /&gt;
* Designed for a cluster of workstations with high-speed network access (it only really supports LANs).&lt;br /&gt;
* Abstract the physical architecture of the participating workstations, by defining common protocols providing well-defined interfaces.&lt;br /&gt;
&lt;br /&gt;
V ran on a LAN, and its developers built a very fast IPC protocol, making it one of the fastest distributed operating systems within a small geographic area. On top of the IPC protocols, V also implemented RPC calls in the background.&lt;br /&gt;
&lt;br /&gt;
V uses the strong consistency model. This model can cause issues with concurrency because in V files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This could cause problems unless there is an effective implementation to deal with multiple versions, etc.&lt;br /&gt;
&lt;br /&gt;
The VMTP protocol was used for communication. It supports request-response behaviour, and also provides transparency, a group communication facility, and flow control. It is broadly similar to TCP.&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20213</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20213"/>
		<updated>2015-04-28T05:14:20Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell Labs backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written in machine-specific assembly.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into Random Access Memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
* general purpose multi-user interactive operating system&lt;br /&gt;
* relatively inexpensive&lt;br /&gt;
* includes text editors, interpreters and compilers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception was a small, minimal-API system designed by Ken Thompson and Dennis Ritchie at Bell Labs. It was essentially an OS optimized for the needs of programmers but not much beyond that. The UNIX OS ran on one computer, and terminals connected to that one computer. Thus it is not a distributed operating system: it is centralized and implements time sharing. In fact, the first version did not even have support for networking.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure: Sun wanted to encrypt the RPC traffic, but encryption would have triggered government export regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
*Sun used an &amp;quot;open protocol&amp;quot; approach to develop NFS&lt;br /&gt;
** Simply specified the exact message formats that clients and servers would use to communicate.&lt;br /&gt;
** Different groups could develop their own NFS servers and thus compete in an NFS marketplace.&lt;br /&gt;
** ex. Sun, NetApp, EMC, IBM, etc.&lt;br /&gt;
&lt;br /&gt;
*NFSv2, simple and fast server crash recovery&lt;br /&gt;
**Any minute that the server is down (or unavailable) makes all the clients unproductive.&lt;br /&gt;
**The protocol is designed to deliver in each protocol request all the information that is needed in order to complete the request.&lt;br /&gt;
**stateless approach: the server does not track anything about what clients are doing&lt;br /&gt;
**no fancy crash recovery is needed, the server just starts running again, and a client, at worst, might have to retry a request.&lt;br /&gt;
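The stateless, retry-on-failure behaviour described above can be sketched as follows. This is an illustrative Python sketch, not real NFS code; the function and class names are invented:&lt;br /&gt;

```python
# Sketch of the recovery model the stateless NFSv2 design enables:
# every request is self-contained, so the client can simply resend it
# after a server crash, with no recovery protocol on the server side.
def read_block(server, file_handle, offset, size, retries=3):
    for _ in range(retries):
        try:
            # The request carries everything needed: handle, offset, size.
            return server.read(file_handle, offset, size)
        except ConnectionError:
            continue  # server rebooted or was unreachable; just retry
    raise TimeoutError("server unavailable")

class FlakyServer:
    """Stand-in server that fails twice before coming back up."""
    def __init__(self):
        self.failures = 2
    def read(self, fh, off, size):
        if self.failures:
            self.failures -= 1
            raise ConnectionError
        return b"x" * size

assert read_block(FlakyServer(), 1, 0, 4) == b"xxxx"
```

Because the server tracks nothing about its clients, the retried request is indistinguishable from the first attempt, which is exactly why "no fancy crash recovery is needed".&lt;br /&gt;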
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow that they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency  to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today's systems. It uses replication and partitioning, which are also employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
*LOCUS was capable of distributing files and processes among the nodes, but not the computing power.&lt;br /&gt;
&lt;br /&gt;
* distributed operating system which supports transparent access to  data through a network wide filesystem&lt;br /&gt;
* upward compatible with Unix&lt;br /&gt;
* automatic replication of storage (backups), distributed process execution (cloud computing?)&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
Mohamed&lt;br /&gt;
&lt;br /&gt;
* OS for networked uniprocessor and multiprocessor workstations with large physical memories&lt;br /&gt;
*shared memory and processes between workstations&lt;br /&gt;
*The motivation for building a new operating system came from three general trends in computer technology: networks, large memories, and multiprocessors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine, and if that machine was no longer idle because another user was using it, the system would take care of migrating the process to yet another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; By using a version number for each file&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; Disable cache to clients, enable write-blocking and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM in computers which can be used for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log structured file-systems are optimized for writes, as changes to previous data are appended at the current position.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSD (Solid-state disks)&lt;br /&gt;
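The log-structured idea above can be shown with a toy model. This is an illustrative Python sketch, not Sprite's file system; the names are invented:&lt;br /&gt;

```python
# Toy model of a log-structured file system: updates are appended at the
# current position of the log rather than written in place, so all writes
# to disk are sequential.
log = []        # the on-disk log (append-only list of (name, data) records)
inode_map = {}  # file name -> index of its latest version in the log

def write(name, data):
    log.append((name, data))        # sequential append at the log head
    inode_map[name] = len(log) - 1  # point the map at the newest copy

def read(name):
    return log[inode_map[name]][1]

write("a", b"v1")
write("a", b"v2")  # the old copy stays in the log; only the map moves
assert read("a") == b"v2"
assert len(log) == 2
```

The stale `b"v1"` record left in the log is why real log-structured file systems need a cleaner (garbage collector) to reclaim space, but the write path itself stays purely sequential.&lt;br /&gt;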
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and a caching system to increase performance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
Mohamed&lt;br /&gt;
* transparent remote access to filesystems&lt;br /&gt;
* portable to other operating systems and machine architectures&lt;br /&gt;
* can add new file systems dynamically the same way that device drivers are added&lt;br /&gt;
* the most flexible method of remote file access available at the time.&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20212</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20212"/>
		<updated>2015-04-28T05:12:36Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Sprite */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell Labs backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written in machine-specific assembly.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into Random Access Memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
* general purpose multi-user interactive operating system&lt;br /&gt;
* relatively inexpensive&lt;br /&gt;
* includes text editors, interpreters and compilers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception was a small, minimal-API system designed by Ken Thompson and Dennis Ritchie at Bell Labs. It was essentially an OS optimized for the needs of programmers but not much beyond that. The UNIX OS ran on one computer, and terminals connected to that one computer. Thus it is not a distributed operating system: it is centralized and implements time sharing. In fact, the first version did not even have support for networking.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure: Sun wanted to encrypt the RPC traffic, but encryption would have triggered government export regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
*Sun used an &amp;quot;open protocol&amp;quot; approach to develop NFS&lt;br /&gt;
** Simply specified the exact message formats that clients and servers would use to communicate.&lt;br /&gt;
** Different groups could develop their own NFS servers and thus compete in an NFS marketplace.&lt;br /&gt;
** ex. Sun, NetApp, EMC, IBM, etc.&lt;br /&gt;
&lt;br /&gt;
*NFSv2, simple and fast server crash recovery&lt;br /&gt;
**Any minute that the server is down (or unavailable) makes all the clients unproductive.&lt;br /&gt;
**The protocol is designed to deliver in each protocol request all the information that is needed in order to complete the request.&lt;br /&gt;
**stateless approach: the server does not track anything about what clients are doing&lt;br /&gt;
**no fancy crash recovery is needed, the server just starts running again, and a client, at worst, might have to retry a request.&lt;br /&gt;
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow that they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency  to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today's systems. It uses replication and partitioning, which are also employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
*LOCUS was capable of distributing files and processes among the nodes, but not the computing power.&lt;br /&gt;
&lt;br /&gt;
* distributed operating system which supports transparent access to  data through a network wide filesystem&lt;br /&gt;
* upward compatible with Unix&lt;br /&gt;
* automatic replication of storage (backups), distributed process execution (cloud computing?)&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
Mohamed&lt;br /&gt;
&lt;br /&gt;
* OS for networked uniprocessor and multiprocessor workstations with large physical memories&lt;br /&gt;
*shared memory and processes between workstations&lt;br /&gt;
*The motivation for building a new operating system came from three general trends in computer technology: networks, large memories, and multiprocessors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine, and if that machine was no longer idle because another user was using it, the system would take care of migrating the process to yet another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; By using a version number for each file&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; Disable cache to clients, enable write-blocking and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM in computers which can be used for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log structured file-systems are optimized for writes, as changes to previous data are appended at the current position.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSD (Solid-state disks)&lt;br /&gt;
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and a caching system to increase performance.&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20211</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20211"/>
		<updated>2015-04-28T04:31:28Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Locus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell Labs backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written in machine-specific assembly.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into Random Access Memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
* general purpose multi-user interactive operating system&lt;br /&gt;
* relatively inexpensive&lt;br /&gt;
* includes text editors, interpreters and compilers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception was a small, minimal-API system designed by Ken Thompson and Dennis Ritchie at Bell Labs. It was essentially an OS optimized for the needs of programmers but not much beyond that. The UNIX OS ran on one computer, and terminals connected to that one computer. Thus it is not a distributed operating system: it is centralized and implements time sharing. In fact, the first version did not even have support for networking.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure: Sun wanted to encrypt the RPC traffic, but encryption would have triggered government export regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
*Sun used an &amp;quot;open protocol&amp;quot; approach to develop NFS&lt;br /&gt;
** Simply specified the exact message formats that clients and servers would use to communicate.&lt;br /&gt;
** Different groups could develop their own NFS servers and thus compete in an NFS marketplace.&lt;br /&gt;
** ex. Sun, NetApp, EMC, IBM, etc.&lt;br /&gt;
&lt;br /&gt;
*NFSv2, simple and fast server crash recovery&lt;br /&gt;
**Any minute that the server is down (or unavailable) makes all the clients unproductive.&lt;br /&gt;
**The protocol is designed to deliver in each protocol request all the information that is needed in order to complete the request.&lt;br /&gt;
**stateless approach: the server does not track anything about what clients are doing&lt;br /&gt;
**no fancy crash recovery is needed, the server just starts running again, and a client, at worst, might have to retry a request.&lt;br /&gt;
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow that they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency  to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today's systems. It uses replication and partitioning, which are also employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
*LOCUS was capable of distributing files and processes among the nodes, but not the computing power.&lt;br /&gt;
&lt;br /&gt;
* distributed operating system which supports transparent access to  data through a network wide filesystem&lt;br /&gt;
* upward compatible with Unix&lt;br /&gt;
* automatic replication of storage (backups), distributed process execution (cloud computing?)&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine, and if that machine was no longer idle because another user was using it, the system would take care of migrating the process to yet another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; By using a version number for each file&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; Disable cache to clients, enable write-blocking and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM in computers which can be used for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log structured file-systems are optimized for writes, as changes to previous data are appended at the current position.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSD (Solid-state disks)&lt;br /&gt;
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and a caching system to increase performance.&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20210</id>
		<title>DistOS 2015W Session 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_3&amp;diff=20210"/>
		<updated>2015-04-28T04:30:08Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Unix */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Multics =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Sameer, Shivjot, Ambalica, Veena&lt;br /&gt;
&lt;br /&gt;
It came into being in the 1960s and had completely vanished by the 2000s. It was started by Bell Labs, General Electric, and MIT, but Bell Labs backed out of the project in 1969.&lt;br /&gt;
Multics is a time-sharing OS which provides multitasking and multiprogramming. It is not a distributed OS but a centralized system, written in machine-specific assembly.&lt;br /&gt;
&lt;br /&gt;
It provides the following features:&lt;br /&gt;
# Utility Computing&lt;br /&gt;
# Access Control Lists&lt;br /&gt;
# Single level storage&lt;br /&gt;
# Dynamic linking&lt;br /&gt;
#* Shared libraries or files can be loaded and linked into Random Access Memory at run time&lt;br /&gt;
# Hot swapping&lt;br /&gt;
# Multiprocessing System&lt;br /&gt;
# Ring oriented Security&lt;br /&gt;
#* It provides a number of levels of authorization within the computer system&lt;br /&gt;
#* Still present in some form today, inside both processors (like x86) and operating systems&lt;br /&gt;
&lt;br /&gt;
= Unix =&lt;br /&gt;
&lt;br /&gt;
* general purpose multi-user interactive operating system&lt;br /&gt;
* relatively inexpensive&lt;br /&gt;
* includes text editors, interpreters and compilers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unix in its original conception was a small, minimal-API system designed by Ken Thompson and Dennis Ritchie at Bell Labs. It was essentially an OS optimized for the needs of programmers but not much beyond that. The UNIX OS ran on one computer, and terminals connected to that one computer. Thus it is not a distributed operating system: it is centralized and implements time sharing. In fact, the first version did not even have support for networking.&lt;br /&gt;
&lt;br /&gt;
The C language was created specifically for Unix, as the creators wanted to create a machine-agnostic language for the operating system.&lt;br /&gt;
&lt;br /&gt;
Most features from Unix are still available in present day Unix-based systems. For example, the shell, with its piping capabilities, is still used today in its original form.&lt;br /&gt;
&lt;br /&gt;
= NFS =&lt;br /&gt;
&lt;br /&gt;
NFS is a protocol for working with distributed file systems transparently using RPC. These connections are not secure. Sun wanted to encrypt these RPC connections, but encryption would have triggered U.S. export regulations that Sun wanted to avoid in order to sell NFS overseas.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sun wanted to secure their NFS with encryption, but at the time encryption was regulated like munitions in the United States. Exporting any product that had encryption was impossible, but Sun needed those sales abroad. To avoid these regulations, Sun decided to sell the insecure NFS version of the system.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Mert&lt;br /&gt;
&lt;br /&gt;
*Sun used an &amp;quot;open protocol&amp;quot; approach to develop NFS&lt;br /&gt;
** Simply specified the exact message formats that clients and servers would use to communicate.&lt;br /&gt;
** Different groups could develop their own NFS servers and thus compete in an NFS marketplace.&lt;br /&gt;
** ex. Sun, NetApp, EMC, IBM, etc.&lt;br /&gt;
&lt;br /&gt;
*NFSv2, simple and fast server crash recovery&lt;br /&gt;
**Any minute that the server is down (or unavailable) makes all the clients unproductive.&lt;br /&gt;
**The protocol is designed to deliver in each protocol request all the information that is needed in order to complete the request.&lt;br /&gt;
**stateless approach: the server does not track anything about what clients are doing&lt;br /&gt;
**no fancy crash recovery is needed, the server just starts running again, and a client, at worst, might have to retry a request.&lt;br /&gt;
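The stateless request idea above can be sketched as a toy handler (hypothetical Python, not Sun&#039;s actual wire protocol): each request carries the file handle, offset and count, so a restarted server needs no recovered state to answer a retry.&lt;br /&gt;

```python
# Toy sketch of an NFSv2-style stateless read (illustrative, not the real protocol).
# Every request carries all the information needed to complete it, so the
# server keeps no per-client state and crash recovery is just restarting.

files = {"fh42": b"hello, distributed world"}  # file handle -> contents

def read(request):
    """Handle a self-contained read request: (file handle, offset, count)."""
    data = files[request["fh"]]
    return data[request["offset"]:request["offset"] + request["count"]]

# The client retries the same request after a server crash; the result
# is identical because the request itself carries the full context.
req = {"fh": "fh42", "offset": 7, "count": 11}
print(read(req))  # b'distributed'
```

Because the request is self-describing, retrying it is always safe; the server never has to reconstruct per-client session state.&lt;br /&gt;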
&lt;br /&gt;
= Locus =&lt;br /&gt;
&lt;br /&gt;
# Not scalable&lt;br /&gt;
#* The synchronization algorithms were so slow that they only managed to run it on five computers&lt;br /&gt;
#* Every computer stores a copy of every file&lt;br /&gt;
#* Also used CAS to manage files&lt;br /&gt;
# Not efficient with abstractions&lt;br /&gt;
#* Trying to distribute files and processes&lt;br /&gt;
#Allowed for process migration&lt;br /&gt;
#Transparency&lt;br /&gt;
#* It provided network transparency to “disguise” its distributed context.&lt;br /&gt;
#Dynamic reconfiguration (it adapts to topology changes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Locus has many similarities with today&#039;s systems: it uses replication and partitioning, both of which are employed in modern cloud and distributed systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Eric&lt;br /&gt;
&lt;br /&gt;
*LOCUS was capable of distributing files and processes among the nodes, but not the computing power.&lt;br /&gt;
&lt;br /&gt;
= Sprite =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Team&#039;&#039;&#039;: Jamie, Hassan, Khaled&lt;br /&gt;
&lt;br /&gt;
Sprite had the following Design Features:&lt;br /&gt;
# Network Transparency&lt;br /&gt;
# Process Migration, file transfer between computers&lt;br /&gt;
#* A user could initiate a process migration to an idle machine, and if that machine was no longer idle (because another user had started using it), the system would take care of migrating the process to another machine&lt;br /&gt;
# Handling Cache Consistency&lt;br /&gt;
#* Sequential file sharing ==&amp;gt; handled by using a version number for each file&lt;br /&gt;
#* Concurrent write sharing ==&amp;gt; handled by disabling client caching, enabling write-blocking, and other methods&lt;br /&gt;
# Implemented a caching system that sped up performance&lt;br /&gt;
# Implemented a log structured file system&lt;br /&gt;
#* They realized that with increasing amounts of RAM in computers which can be used for caching, writes to the disk were the main bottleneck, not reads.&lt;br /&gt;
#* Log structured file-systems are optimized for writes, as changes to previous data are appended at the current position.&lt;br /&gt;
#* This allows for very fast, sequential writes.&lt;br /&gt;
#* Example: SSDs (solid-state drives)&lt;br /&gt;
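The log-structured idea above can be sketched as a minimal toy (illustrative Python, not Sprite&#039;s actual implementation): every write, including an update, is appended at the tail of the log, and an in-memory index records where the latest version of each file lives.&lt;br /&gt;

```python
# Minimal sketch of a log-structured store (illustrative): every write is an
# append to the end of the log, and an index maps each file name to the
# offset of its most recent version. Reads consult the index; superseded
# versions simply become garbage to be cleaned up later.

log = bytearray()
index = {}  # filename -> (offset, length) of the latest version

def write(name, data):
    index[name] = (len(log), len(data))  # new version lives at the log tail
    log.extend(data)                     # sequential append, never in-place

def read(name):
    off, length = index[name]
    return bytes(log[off:off + length])

write("a.txt", b"v1")
write("a.txt", b"v2 updated")   # an update appends; it does not overwrite v1
print(read("a.txt"))            # b'v2 updated'
print(len(log))                 # 12 -- the stale v1 bytes are still in the log
```

Because all writes go to the tail, the disk sees only sequential I/O, which is exactly the write-bottleneck optimization described above.&lt;br /&gt;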
&lt;br /&gt;
The main features to take away from the Sprite system are that it implemented a log-structured file system and used caching to increase performance.&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20013</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20013"/>
		<updated>2015-03-16T19:28:01Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Using many systems provides redundancy through repetition&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160 bit key space, binary tree for partition and searching down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest common prefix first, then increase the number of matching digits at each hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaf&lt;br /&gt;
*** Any given set of k nodes is very unlikely to all fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse a binary tree, so a lookup takes at most O(log n) hops&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
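The XOR metric underlying these lookups can be sketched in a few lines (a toy with 8-bit IDs instead of Kademlia&#039;s 160-bit keys): distance between IDs is their bitwise XOR, and each hop forwards to the known contact closest to the target.&lt;br /&gt;

```python
# Toy illustration of Kademlia's XOR metric (8-bit IDs instead of 160-bit).
# The distance between two node IDs is their bitwise XOR; a lookup always
# forwards to the known contact closest to the target, so each hop fixes at
# least one more high-order bit and lookups take O(log n) steps.

def distance(a, b):
    return a ^ b

def closest(contacts, target):
    """Pick the known contact with the smallest XOR distance to the target."""
    return min(contacts, key=lambda c: distance(c, target))

contacts = [0b00010110, 0b10110001, 0b11100100]
target = 0b11100111
nxt = closest(contacts, target)
print(bin(nxt))               # 0b11100100 shares the longest prefix with target
print(distance(nxt, target))  # 3 -- only the low-order bits remain to fix
```

The binary-tree view in the notes is the same idea: contacts sharing a longer prefix with the target sit in a smaller subtree, so each hop halves the remaining search space.&lt;br /&gt;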
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
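The value/node mapping described above can be sketched as a toy (illustrative Python with made-up node IDs; Comet itself builds on a Kademlia-style DHT): hash the key into the ID space, then pick the nodes whose IDs are closest to that hash.&lt;br /&gt;

```python
# Toy sketch of DHT-style key-to-node mapping (illustrative; Comet itself is
# built on a Kademlia-based DHT with 160-bit IDs). The key is hashed into the
# ID space and the value is placed on the nodes whose IDs are closest to it.

import hashlib

def key_to_id(key, bits=16):
    """Hash a key into a small ID space (real DHTs use full 160-bit SHA-1)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:2], "big") % (2 ** bits)

def nodes_for_key(key, node_ids, replicas=2):
    """Return the node IDs closest (by XOR distance) to the key's hash."""
    kid = key_to_id(key)
    return sorted(node_ids, key=lambda n: n ^ kid)[:replicas]

nodes = [1024, 8191, 30000, 45000, 61440]       # hypothetical node IDs
print(nodes_for_key("user:alice/profile", nodes))
```

Any client that knows the hash function and some nodes can locate a key&#039;s replicas without a central directory, which is what makes the routing substrate decentralized.&lt;br /&gt;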
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
Assumes clients are untrusted autonomous nodes.&lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers &lt;br /&gt;
e.g. an executing handler cannot corrupt the node or use unlimited resources. &lt;br /&gt;
Handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users running Comet must be able to trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects or resources stored on the same node and has no direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Comet for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members: Ashley, Dany, Alexis&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon a node&#039;s creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, you see which neighbour is closest to your destination, and recurse.&lt;br /&gt;
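This greedy traversal can be sketched as a toy (illustrative Python using numeric closeness between made-up node IDs; Tapestry&#039;s real routing tables match the destination ID digit by digit):&lt;br /&gt;

```python
# Toy sketch of greedy overlay routing (illustrative; Tapestry actually uses
# per-digit prefix routing tables). Each node knows a few neighbours and
# forwards a lookup to whichever neighbour is closest to the destination ID,
# recursing until no neighbour is closer than the current node.

neighbours = {  # hypothetical overlay: node ID -> its neighbour IDs
    10: [40, 90],
    40: [10, 70],
    70: [40, 90],
    90: [10, 70],
}

def route(current, dest, hops=None):
    hops = (hops or []) + [current]
    if current == dest:
        return hops
    best = min(neighbours[current], key=lambda n: abs(n - dest))
    if abs(best - dest) >= abs(current - dest):
        return hops  # local minimum: no neighbour is closer; stop here
    return route(best, dest, hops)

print(route(10, 70))  # [10, 90, 70]
```

Each hop strictly reduces the distance to the destination, which is why access times stay O(log n) in a well-populated overlay.&lt;br /&gt;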
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the network contains no adversaries: while network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20012</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20012"/>
		<updated>2015-03-16T19:27:16Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Using many systems provides redundancy through repetition&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160 bit key space, binary tree for partition and searching down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest common prefix first, then increase the number of matching digits at each hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaf&lt;br /&gt;
*** Any given set of k nodes is very unlikely to all fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse a binary tree, so a lookup takes at most O(log n) hops&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
Assumes clients are untrusted autonomous nodes.&lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers &lt;br /&gt;
e.g. an executing handler cannot corrupt the node or use unlimited resources. &lt;br /&gt;
Handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users running Comet must be able to trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects or resources stored on the same node and has no direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Comet for this purpose?&lt;br /&gt;
Possibly video streaming services such as Netflix.&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members: Ashley, Dany, Alexis&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon a node&#039;s creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, you see which neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the network contains no adversaries: while network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20011</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20011"/>
		<updated>2015-03-16T19:18:18Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Using many systems provides redundancy through repetition&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160 bit key space, binary tree for partition and searching down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest common prefix first, then increase the number of matching digits at each hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaf&lt;br /&gt;
*** Any given set of k nodes is very unlikely to all fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse a binary tree, so a lookup takes at most O(log n) hops&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers &lt;br /&gt;
e.g. an executing handler cannot corrupt the node or use unlimited resources. &lt;br /&gt;
Handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users running Comet must be able to trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects or resources stored on the same node and has no direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Comet for this purpose?&lt;br /&gt;
Possibly video streaming services such as Netflix.&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members: Ashley, Dany, Alexis&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon a node&#039;s creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, you see which neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the network contains no adversaries: while network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20008</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20008"/>
		<updated>2015-03-16T19:15:24Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Using many systems provides redundancy through repetition&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160 bit key space, binary tree for partition and searching down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest common prefix first, then increase the number of matching digits at each hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaf&lt;br /&gt;
*** Any given set of k nodes is very unlikely to all fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse a binary tree, so a lookup takes at most O(log n) hops&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers &lt;br /&gt;
e.g. an executing handler cannot corrupt the node or use unlimited resources. &lt;br /&gt;
Handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users running Comet must be able to trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects or resources stored on the same node and has no direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Comet for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members:&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon a node&#039;s creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, you see which neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the network contains no adversaries: while network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20007</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20007"/>
		<updated>2015-03-16T19:14:59Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Using many systems provides redundancy through repetition&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160 bit key space, binary tree for partition and searching down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest common prefix first, then increase the number of matching digits at each hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaf&lt;br /&gt;
*** Any given set of k nodes is very unlikely to all fail within an hour of each other&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups traverse a binary tree, so a lookup takes at most O(log n) hops&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate, which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
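That key-to-ID mapping can be sketched as follows; SHA-1 and the 160-bit ID space are assumptions typical of DHTs, since the notes do not name the hash function Comet uses:&lt;br /&gt;

```python
# Sketch of the hash-based key/node mapping described above. SHA-1
# is an assumption typical of 160-bit DHTs, not taken from the notes.
import hashlib

def key_to_id(key: str) -> int:
    """Hash a key into the 160-bit DHT ID space (SHA-1 assumed)."""
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")

# Keys are spread uniformly over the ID space; the node whose ID is
# closest to key_to_id(key) stores the associated value.
print(hex(key_to_id("some-file.txt")))
```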
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers; e.g., an executing handler cannot corrupt the node or use unlimited resources, and handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users downloading Comet must trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects&lt;br /&gt;
or resources stored on the same node and has no&lt;br /&gt;
direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot&lt;br /&gt;
send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members:&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon node creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, forward to whichever neighbour is closest to your destination, and recurse.&lt;br /&gt;
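The greedy traversal just described can be sketched as follows; the ring topology and numeric distance here are illustrative assumptions, not Tapestry&#039;s actual digit-by-digit routing tables:&lt;br /&gt;

```python
# Minimal greedy-routing sketch of the traversal described above:
# at each hop, forward to whichever neighbour is closest to the
# destination ID. The graph and distance metric are assumed for
# illustration only.

def route(graph: dict[int, list[int]], start: int, dest: int) -> list[int]:
    """Greedy walk: hop to the neighbour closest to dest until stuck or arrived."""
    path = [start]
    current = start
    while current != dest:
        nxt = min(graph[current], key=lambda n: abs(n - dest))
        if abs(nxt - dest) >= abs(current - dest):
            break  # no neighbour improves; greedy routing is stuck
        path.append(nxt)
        current = nxt
    return path

ring = {0: [1, 7], 1: [0, 2], 2: [1, 3], 3: [2, 4],
        4: [3, 5], 5: [4, 6], 6: [5, 7], 7: [6, 0]}
print(route(ring, 0, 3))  # [0, 1, 2, 3]
```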
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the system faces no adversaries. While network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20006</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20006"/>
		<updated>2015-03-16T19:14:12Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Why are DHTs relevant to distributed OSs?&#039;&#039;&#039;&lt;br /&gt;
** Many systems are in use, so data is replicated across them&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is content divided?&#039;&#039;&#039;&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160-bit key space; a binary tree partitions the space, and lookups search down the tree&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;How is the network traversed?&#039;&#039;&#039;&lt;br /&gt;
** Match the longest ID prefix, then increase the number of matched digits hop by hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What trust assumptions does the system make?&#039;&#039;&#039;&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets&lt;br /&gt;
*** Binary tree with each node having k-buckets as leaves&lt;br /&gt;
*** The k nodes in a bucket are unlikely to all fail within the same hour&lt;br /&gt;
*** New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Performance constraints?&#039;&#039;&#039;&lt;br /&gt;
** Lookups walk down the binary tree, so a lookup takes at most O(log n) steps&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&#039;&#039;&#039;&lt;br /&gt;
** DNS&lt;br /&gt;
** Any kind of meta-data service&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure, and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate, which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers; e.g., an executing handler cannot corrupt the node or use unlimited resources, and handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users downloading Comet must trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects&lt;br /&gt;
or resources stored on the same node and has no&lt;br /&gt;
direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot&lt;br /&gt;
send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members:&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon node creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, forward to whichever neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the system faces no adversaries. While network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20004</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20004"/>
		<updated>2015-03-16T19:13:19Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
** Many systems are in use, so data is replicated across them&lt;br /&gt;
** A DHT distributes content over multiple nodes&lt;br /&gt;
** Decentralized, therefore peer-to-peer&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
** File hashes&lt;br /&gt;
** Node ID to locate the value&lt;br /&gt;
** 160-bit key space; a binary tree partitions the space, and lookups search down the tree&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
** Match the longest ID prefix, then increase the number of matched digits hop by hop to get closer to the target node&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
** DHT by itself is insecure&lt;br /&gt;
** The academic and practitioner communities have realized that all current DHT designs suffer from a security weakness, known as the Sybil attack&lt;br /&gt;
** K-buckets: binary tree with each node having k-buckets as leaves. The k nodes in a bucket are unlikely to all fail within the same hour. New nodes are only inserted when there is room in the bucket or the oldest node doesn&#039;t respond&lt;br /&gt;
** Uses UDP, so packets may be lost&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
** Lookups walk down the binary tree, so a lookup takes at most O(log n) steps&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure, and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate, which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers; e.g., an executing handler cannot corrupt the node or use unlimited resources, and handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
Users downloading Comet must trust it and have guarantees about its behavior. For this reason, Comet enforces four important restrictions:&lt;br /&gt;
1. Limited knowledge: an ASO is not aware of other objects&lt;br /&gt;
or resources stored on the same node and has no&lt;br /&gt;
direct way to learn about them.&lt;br /&gt;
2. Limited access: an object handler can manipulate only its own value and cannot modify the values of other objects on its storage node.&lt;br /&gt;
3. Limited communication: an active storage object cannot&lt;br /&gt;
send arbitrary messages over the network.&lt;br /&gt;
4. Limited resource consumption: an ASO’s resource usage is strictly bounded, e.g., the system limits the amount of computation and memory it can consume.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members:&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
Uses consistent hashing (SHA-1); upon node creation (join), it builds an optimal routing table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
You look at your neighbours, forward to whichever neighbour is closest to your destination, and recurse.&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
It assumes the system faces no adversaries. While network failures may happen and nodes may go down, no node will deliberately try to disrupt the network.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
O(log n) access times to any given node. Best effort publishing/unpublishing via decentralized object location routing.&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20001</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20001"/>
		<updated>2015-03-16T19:10:35Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: /* Comet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure, and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate, which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers; e.g., an executing handler cannot corrupt the node or use unlimited resources, and handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members:&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20000</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=20000"/>
		<updated>2015-03-16T19:09:48Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Feel free to tweak the questions!&lt;br /&gt;
&lt;br /&gt;
==Kademlia==&lt;br /&gt;
Members: Kirill, Deep, Jason, Hassan&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
==Comet==&lt;br /&gt;
Members: Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
A DHT is an infrastructure that enables many clients to share information and scales to handle node arrival, departure, and failure. DHTs serve many of the design goals of distributed operating systems. The paper states that &amp;quot;DHTs are increasingly used to support a variety of distributed applications, such as file-sharing, distributed resource tracking, end-system multicast, publish-subscribe systems, distributed search engines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the three main components of the Comet system is a routing substrate, which implements the value/node mapping. This allows a client to find the node that stores a specific data item. Since Comet uses a DHT implementation, routing occurs by applying a hash function to the key to compute the node IDs that store the associated value.&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
A client node running Comet should be protected from the execution of handlers; e.g., an executing handler cannot corrupt the node or use unlimited resources, and handlers should not be able to mount messaging attacks on other nodes.&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Kademlia for this purpose?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Tapestry==&lt;br /&gt;
Members:&lt;br /&gt;
&lt;br /&gt;
* Why are DHTs relevant to distributed OSs?&lt;br /&gt;
&lt;br /&gt;
Because they provide a way to distribute information over large networks (distributed key/value store).&lt;br /&gt;
&lt;br /&gt;
* How is content divided?&lt;br /&gt;
&lt;br /&gt;
* How is the network traversed?&lt;br /&gt;
&lt;br /&gt;
* What trust assumptions does the system make?&lt;br /&gt;
&lt;br /&gt;
* Performance constraints?&lt;br /&gt;
&lt;br /&gt;
* What non-DHT internet infrastructure would you replace with a DHT?  How suitable is Tapestry for this purpose?&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=19996</id>
		<title>DistOS 2015W Session 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_10&amp;diff=19996"/>
		<updated>2015-03-16T18:45:43Z</updated>

		<summary type="html">&lt;p&gt;Mohamedahmed: Created page with &amp;quot;Group 2 - Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma  == Comet ==&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Group 2 - Mohamed Ahmed, Apoorv Sangal, Ambalica Sharma&lt;br /&gt;
&lt;br /&gt;
== Comet ==&lt;/div&gt;</summary>
		<author><name>Mohamedahmed</name></author>
	</entry>
</feed>