<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Veenarose</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Veenarose"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Veenarose"/>
	<updated>2026-05-12T22:16:20Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20191</id>
		<title>DistOS 2015W Session 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_5&amp;diff=20191"/>
		<updated>2015-04-12T18:41:54Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* Google File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= The Clouds Distributed Operating System =&lt;br /&gt;
It is a distributed OS running on a set of computers interconnected by a network; it unifies the separate machines into what appears to be a single system.&lt;br /&gt;
&lt;br /&gt;
The OS combines two design paradigms:&lt;br /&gt;
* Message-based OS&lt;br /&gt;
* Object-based OS&lt;br /&gt;
&lt;br /&gt;
== Object Thread Model ==&lt;br /&gt;
&lt;br /&gt;
The system is structured around an object/thread model: it has a set of objects, each defined by a class, that respond to messages. Sending a message to an object causes the object to execute the corresponding method and then reply.&lt;br /&gt;
&lt;br /&gt;
The system has &#039;&#039;&#039;active objects&#039;&#039;&#039; and &#039;&#039;&#039;passive objects&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
# Active objects have one or more processes associated with them and can communicate with the external environment.&lt;br /&gt;
# Passive objects are those that currently have no active thread executing in them.&lt;br /&gt;
&lt;br /&gt;
Data in Clouds is long lived: since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.&lt;br /&gt;
&lt;br /&gt;
== Threads ==&lt;br /&gt;
&lt;br /&gt;
Threads are logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space, and several threads can enter an object simultaneously and execute concurrently. The nature of the Clouds object prohibits a thread from accessing any data outside the address space in which it is currently executing.&lt;br /&gt;
&lt;br /&gt;
== Interaction Between Objects and Threads ==&lt;br /&gt;
&lt;br /&gt;
# Inter-object interfaces are procedural.&lt;br /&gt;
# Invocations work across machine boundaries.&lt;br /&gt;
# Objects in Clouds unify the concepts of persistent storage and memory into a single address space, which simplifies programming.&lt;br /&gt;
# Control flow is achieved by threads invoking objects (a conceptual sketch follows this list).&lt;br /&gt;
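&lt;br /&gt;
A minimal sketch of the invocation model, using hypothetical names (Clouds itself was not written in Python): a thread invokes a method on an object, the runtime runs the method wherever the object lives, and the result is returned to the calling thread.&lt;br /&gt;
&lt;br /&gt;
 # Hypothetical model of Clouds invocation, not the real system.&lt;br /&gt;
 class CloudsObject:&lt;br /&gt;
     # Persistent state lives in the object&#039;s own address space.&lt;br /&gt;
     def __init__(self):&lt;br /&gt;
         self.count = 0&lt;br /&gt;
 &lt;br /&gt;
     def increment(self):  # executed by whichever thread invokes it&lt;br /&gt;
         self.count += 1&lt;br /&gt;
         return self.count&lt;br /&gt;
 &lt;br /&gt;
 def invoke(obj, method, *args):&lt;br /&gt;
     # The Clouds runtime would locate obj, possibly on another machine,&lt;br /&gt;
     # and run the method there; here we simply call it locally.&lt;br /&gt;
     return getattr(obj, method)(*args)&lt;br /&gt;
 &lt;br /&gt;
 counter = CloudsObject()&lt;br /&gt;
 print(invoke(counter, &#039;increment&#039;))  # control flow enters the object&lt;br /&gt;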
&lt;br /&gt;
== Clouds Environment ==&lt;br /&gt;
&lt;br /&gt;
# Integrates a set of homogeneous machines into one seamless environment.&lt;br /&gt;
# There are three logical categories of machines: compute servers, user workstations and data servers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Plan 9 =&lt;br /&gt;
&lt;br /&gt;
Plan 9 is a general-purpose, multi-user and mobile computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs, the birthplace of Unix. The original Unix OS had no support for networking, and there were many attempts over the years by others to create distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.&lt;br /&gt;
&lt;br /&gt;
The goals of this system were:&lt;br /&gt;
# To build a distributed system that can be centrally administered.&lt;br /&gt;
# To be cost effective using cheap, modern microcomputers. &lt;br /&gt;
&lt;br /&gt;
The distribution itself is transparent to most programs. This is made possible by two properties:&lt;br /&gt;
# A per-process-group namespace.&lt;br /&gt;
# Uniform access to most resources by representing them as a file.&lt;br /&gt;
&lt;br /&gt;
== Unix Compatibility ==&lt;br /&gt;
&lt;br /&gt;
The commands, libraries and system calls are similar to those of Unix, so a casual user cannot distinguish between the two. The deepest problems in UNIX were too entrenched to fix, but its good ideas were carried along: areas UNIX addressed badly were improved, old tools were dropped, and others were polished and reused.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Similarities with UNIX ==&lt;br /&gt;
* shell&lt;br /&gt;
* Various C compilers&lt;br /&gt;
&lt;br /&gt;
== Unique Features ==&lt;br /&gt;
&lt;br /&gt;
What actually distinguishes Plan 9 is its &#039;&#039;&#039;organization&#039;&#039;&#039;. Plan 9 is divided along the lines of service function.&lt;br /&gt;
* CPU servers and terminals use the same kernel.&lt;br /&gt;
* Users may choose to run programs locally or remotely on CPU servers.&lt;br /&gt;
* It lets the user choose whether they want a distributed or centralized system.&lt;br /&gt;
&lt;br /&gt;
The design of Plan 9 is based on 3 principles:&lt;br /&gt;
# Resources are named and accessed like files in a hierarchical file system.&lt;br /&gt;
# A standard protocol, 9P, is used to access these resources.&lt;br /&gt;
# Disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Namespaces ===&lt;br /&gt;
&lt;br /&gt;
When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls, mount and bind.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mount&#039;&#039;&#039; attaches a new file system to a point in the name space.&lt;br /&gt;
* &#039;&#039;&#039;Bind&#039;&#039;&#039; attaches a kernel-resident (existing, mounted) part of the name space to another point, and so can rearrange pieces of the name space.&lt;br /&gt;
* There is also &#039;&#039;&#039;unmount&#039;&#039;&#039;, which undoes the effects of the other two calls.&lt;br /&gt;
&lt;br /&gt;
Namespaces in Plan 9 are on a per-process basis. While there was a way to reference every resource by a unique name, using mount and bind every process could build a custom namespace as it saw fit, as the sketch below illustrates.&lt;br /&gt;
&lt;br /&gt;
Since most resources are in the form of files (and folders), the term &#039;&#039;namespace&#039;&#039; really only refers to the filesystem layout.&lt;br /&gt;
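&lt;br /&gt;
A minimal sketch of the idea, using a hypothetical in-memory model rather than the real Plan 9 system calls: bind splices an existing tree onto another path, and each process group carries its own table of bindings.&lt;br /&gt;
&lt;br /&gt;
 # Hypothetical model of per-process name spaces (not the real syscalls).&lt;br /&gt;
 class NameSpace:&lt;br /&gt;
     def __init__(self, bindings=None):&lt;br /&gt;
         self.bindings = dict(bindings or {})&lt;br /&gt;
 &lt;br /&gt;
     def bind(self, what, where):&lt;br /&gt;
         # splice the existing tree &#039;what&#039; onto the path &#039;where&#039;&lt;br /&gt;
         self.bindings[where] = what&lt;br /&gt;
 &lt;br /&gt;
     def fork(self):&lt;br /&gt;
         # a new process group starts with a copy it can customize&lt;br /&gt;
         return NameSpace(self.bindings)&lt;br /&gt;
 &lt;br /&gt;
 ns = NameSpace()&lt;br /&gt;
 ns.bind(&#039;/net.alt&#039;, &#039;/net&#039;)  # e.g. route network access elsewhere&lt;br /&gt;
 child = ns.fork()             # changes in the child do not affect ns&lt;br /&gt;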
&lt;br /&gt;
=== Parallel Programming ===&lt;br /&gt;
Parallel programming was supported in two ways (see the sketch below):&lt;br /&gt;
* The kernel provides a simple process model and carefully designed system calls for synchronization.&lt;br /&gt;
* The programming language supports concurrent programming.&lt;br /&gt;
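&lt;br /&gt;
Plan 9&#039;s concurrent language Alef supported channel-style (CSP-like) concurrency; a rough sketch of that style follows, written in Python with a queue standing in for a channel (illustrative only, not Alef itself).&lt;br /&gt;
&lt;br /&gt;
 # Channel-style concurrency sketch; a Queue stands in for a channel.&lt;br /&gt;
 import queue&lt;br /&gt;
 import threading&lt;br /&gt;
 &lt;br /&gt;
 def worker(ch):&lt;br /&gt;
     for n in range(3):&lt;br /&gt;
         ch.put(n * n)  # send a value on the channel&lt;br /&gt;
     ch.put(None)       # signal completion&lt;br /&gt;
 &lt;br /&gt;
 ch = queue.Queue()&lt;br /&gt;
 threading.Thread(target=worker, args=(ch,)).start()&lt;br /&gt;
 while (v := ch.get()) is not None:  # receive until the sender is done&lt;br /&gt;
     print(v)&lt;br /&gt;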
&lt;br /&gt;
== Legacy ==&lt;br /&gt;
&lt;br /&gt;
Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the &#039;&#039;/proc&#039;&#039; virtual filesystem, which exposes information about running processes as files, exists in modern Linux kernels.&lt;br /&gt;
&lt;br /&gt;
= Google File System =&lt;br /&gt;
&lt;br /&gt;
GFS is a scalable, distributed file system for large, data-intensive applications, crafted to Google&#039;s unique needs as a search engine company.&lt;br /&gt;
&lt;br /&gt;
Unlike most filesystems, GFS is not part of the kernel: it is implemented as a library that individual applications link against. While this introduces some technical overhead, it gives the system more freedom to implement (or omit) certain non-standard features.&lt;br /&gt;
&lt;br /&gt;
A link to an explanation of how GFS works:&lt;br /&gt;
[http://computer.howstuffworks.com/internet/basics/google-file-system1.htm]&lt;br /&gt;
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
&lt;br /&gt;
The architecture of the Google file system consists of a single master, multiple chunkservers and multiple clients. Chunkservers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64 KB blocks, each with its own checksum for data integrity checks. Chunks are replicated between servers, three ways by default. The master maintains all the file system metadata, which includes the namespace and chunk locations.&lt;br /&gt;
&lt;br /&gt;
Each chunk is 64 MB large (contrast this with typical filesystem sectors of 512 or 4096 bytes), as the system is meant to hold an enormous amount of data, namely the internet. The large chunk size is also important for the scalability of the system: the larger the chunk size, the less metadata the master has to store for any given amount of data. At this size, the master is able to keep the entirety of the metadata in memory, increasing performance by a significant margin.&lt;br /&gt;
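&lt;br /&gt;
A back-of-the-envelope sketch using the figures above (the names are illustrative, not GFS APIs): file offsets map to chunk indexes, and per-chunk metadata is small enough to keep entirely in RAM.&lt;br /&gt;
&lt;br /&gt;
 CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks&lt;br /&gt;
 BLOCK_SIZE = 64 * 1024         # 64 KB checksummed blocks&lt;br /&gt;
 &lt;br /&gt;
 def chunk_index(offset):&lt;br /&gt;
     # which chunk of a file holds this byte offset&lt;br /&gt;
     return offset // CHUNK_SIZE&lt;br /&gt;
 &lt;br /&gt;
 # Rough master-memory estimate: assuming ~64 bytes of metadata per&lt;br /&gt;
 # chunk, a petabyte of data needs only about 1 GB of metadata in RAM.&lt;br /&gt;
 chunks_per_pb = 10**15 // CHUNK_SIZE&lt;br /&gt;
 print(chunks_per_pb * 64)  # roughly 9.5e8 bytes, i.e. about 1 GB&lt;br /&gt;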
&lt;br /&gt;
== Operation ==&lt;br /&gt;
&lt;br /&gt;
Master and chunkserver communication consists of:&lt;br /&gt;
# checking whether any chunkserver is down&lt;br /&gt;
# checking whether any file is corrupted&lt;br /&gt;
# deleting stale chunks&lt;br /&gt;
&lt;br /&gt;
When a client wants to perform operations on chunks:&lt;br /&gt;
# it first asks the master for the list of servers that store the parts of the file it wants to access&lt;br /&gt;
# it receives a list of chunkservers, with multiple servers for each chunk&lt;br /&gt;
# it then communicates directly with the chunkservers to perform the operation&lt;br /&gt;
&lt;br /&gt;
The system is geared towards appends and sequential reads; a sketch of the read path follows below. This is why the master responds with multiple server addresses for each chunk: the client can request a small piece from each server, increasing data throughput roughly linearly with the number of servers. Writes, in general, take the form of a special &#039;&#039;append&#039;&#039; operation. When appending, there is no chance that two clients will want to write to the same location at the same time, which avoids potential synchronization issues. If there are multiple appends to the same file at the same time, the chunkservers are free to order them as they wish (chunks on each server are not guaranteed to be byte-for-byte identical). Changes may also be applied multiple times. Resolving these issues is left to the application using GFS. While a problem in the general sense, this is good enough for Google&#039;s needs.&lt;br /&gt;
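&lt;br /&gt;
A minimal sketch of the read path just described, with hypothetical names standing in for the real RPC interfaces:&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 &lt;br /&gt;
 CHUNK_SIZE = 64 * 1024 * 1024&lt;br /&gt;
 &lt;br /&gt;
 # Hypothetical master-side metadata: (path, chunk index) -&gt; replicas.&lt;br /&gt;
 chunk_locations = {&lt;br /&gt;
     (&#039;/logs/web.0&#039;, 0): [&#039;cs1:7000&#039;, &#039;cs4:7000&#039;, &#039;cs9:7000&#039;],&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 def read(path, offset, fetch):&lt;br /&gt;
     # fetch stands in for the chunkserver RPC&lt;br /&gt;
     idx = offset // CHUNK_SIZE               # 1. locate the chunk&lt;br /&gt;
     replicas = chunk_locations[(path, idx)]  # 2. ask the master once&lt;br /&gt;
     server = random.choice(replicas)         # 3. pick any replica&lt;br /&gt;
     return fetch(server, path, idx, offset % CHUNK_SIZE)&lt;br /&gt;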
&lt;br /&gt;
== Redundancy ==&lt;br /&gt;
&lt;br /&gt;
GFS is built with failure in mind: the system expects that at any time some server or disk is malfunctioning. The system deals with failures as follows.&lt;br /&gt;
&lt;br /&gt;
=== Chunk Servers ===&lt;br /&gt;
&lt;br /&gt;
By default, chunks are replicated to three servers; the exact replication factor can be chosen by the application doing the write. When a chunkserver finds that some of its data is corrupt, it fetches a good copy from another replica to repair itself{{Citation needed}}. A sketch of the block-level check is below.&lt;br /&gt;
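&lt;br /&gt;
A minimal sketch of the block-level integrity check implied above (64 KB blocks, each with its own checksum; the function name is illustrative):&lt;br /&gt;
&lt;br /&gt;
 import zlib&lt;br /&gt;
 &lt;br /&gt;
 def find_corrupt_blocks(blocks, checksums):&lt;br /&gt;
     # blocks: list of 64 KB byte strings; checksums: the stored CRCs&lt;br /&gt;
     return [i for i, b in enumerate(blocks)&lt;br /&gt;
             if zlib.crc32(b) != checksums[i]]&lt;br /&gt;
 &lt;br /&gt;
 # Any indexes returned are blocks to re-fetch from another replica.&lt;br /&gt;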
&lt;br /&gt;
=== Master Server ===&lt;br /&gt;
&lt;br /&gt;
For efficiency, there is only a single live master server at a time. While this does not make the system completely distributed, it avoids many synchronization problems and suits Google&#039;s needs. At any point in time, there are multiple read-only master servers that copy metadata from the currently live master. Should the live master go down, they serve any read operations from clients until one of the hot spares is promoted to be the new live master server.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20190</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20190"/>
		<updated>2015-04-12T18:37:04Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* OCEAN STORE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==FARSITE==&lt;br /&gt;
&lt;br /&gt;
Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.&lt;br /&gt;
&lt;br /&gt;
[http://research.microsoft.com/apps/pubs/default.aspx?id=67917]&lt;br /&gt;
&lt;br /&gt;
==OCEAN STORE==&lt;br /&gt;
A link to a mini review of the OceanStore paper:&lt;br /&gt;
&lt;br /&gt;
[http://www.cse.lehigh.edu/~brian/course/advanced-networking/reviews/manura-oceanstore.html]&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention what programming language they used, but it was based on Windows; they did not implement the full file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a university or large corporation, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Farsite&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
| Implemented content leases&lt;br /&gt;
| Update model handled data consistency, no leases&lt;br /&gt;
|-&lt;br /&gt;
| Single-tier, peer-to-peer model&lt;br /&gt;
| Two-tier, client-server model&lt;br /&gt;
|-&lt;br /&gt;
| Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Cryptographic public/private key security&lt;br /&gt;
| Read and write privileges&lt;br /&gt;
|-&lt;br /&gt;
| Randomized data replication&lt;br /&gt;
| Nomadic data concept&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== FARSITE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* Cascading certificate system through the directory hierarchy&lt;br /&gt;
* Keys&lt;br /&gt;
* Three types of certificates&lt;br /&gt;
* CFS required to authorize certificates&lt;br /&gt;
* Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, the group is trusted not to make an incorrect update to directory metadata. This metadata includes an access control list (ACL) of the public keys of all users who are authorized writers to that directory and to the files in it.&lt;br /&gt;
* Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039;&lt;br /&gt;
* Client monitor, directory group, file host&lt;br /&gt;
* When space runs out in a directory group, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== OCEANSTORE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* GUIDs and ACLs are used for writes; encryption is used for reads.&lt;br /&gt;
* To prevent unauthorized reads, the system encrypts all data that is not completely public and distributes the encryption key to those users with read permission.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20189</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20189"/>
		<updated>2015-04-12T18:36:39Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* OCEAN STORE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==FARSITE==&lt;br /&gt;
&lt;br /&gt;
Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.&lt;br /&gt;
&lt;br /&gt;
[http://research.microsoft.com/apps/pubs/default.aspx?id=67917]&lt;br /&gt;
&lt;br /&gt;
==OCEAN STORE==&lt;br /&gt;
A link to a mini review of the OceanStore paper:&lt;br /&gt;
&lt;br /&gt;
[http://www.cse.lehigh.edu/~brian/course/advanced-networking/reviews/manura-oceanstore.html]&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention what programming language they used, but it was based on Windows; they did not implement the full file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a university or large corporation, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Farsite&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
| Implemented content leases&lt;br /&gt;
| Update model handled data consistency, no leases&lt;br /&gt;
|-&lt;br /&gt;
| Single-tier, peer-to-peer model&lt;br /&gt;
| Two-tier, client-server model&lt;br /&gt;
|-&lt;br /&gt;
| Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Cryptographic public/private key security&lt;br /&gt;
| Read and write privileges&lt;br /&gt;
|-&lt;br /&gt;
| Randomized data replication&lt;br /&gt;
| Nomadic data concept&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== FARSITE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* Cascading certificate system through the directory hierarchy&lt;br /&gt;
* Keys&lt;br /&gt;
* Three types of certificates&lt;br /&gt;
* CFS required to authorize certificates&lt;br /&gt;
* Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, the group is trusted not to make an incorrect update to directory metadata. This metadata includes an access control list (ACL) of the public keys of all users who are authorized writers to that directory and to the files in it.&lt;br /&gt;
* Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039;&lt;br /&gt;
* Client monitor, directory group, file host&lt;br /&gt;
* When space runs out in a directory group, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== OCEANSTORE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* GUIDs and ACLs are used for writes; encryption is used for reads.&lt;br /&gt;
* To prevent unauthorized reads, the system encrypts all data that is not completely public and distributes the encryption key to those users with read permission.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20188</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20188"/>
		<updated>2015-04-12T18:35:12Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* FARSITE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==FARSITE==&lt;br /&gt;
&lt;br /&gt;
Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.&lt;br /&gt;
&lt;br /&gt;
[http://research.microsoft.com/apps/pubs/default.aspx?id=67917]&lt;br /&gt;
&lt;br /&gt;
==OCEAN STORE==&lt;br /&gt;
“OceanStore: An Architecture for Global-Scale Persistent Storage” proposes an architecture for a global, durable, and highly available persistent storage network composed of untrusted servers. Data can migrate and replicate to where it is most needed, giving benefits similar to caching. For security, all stored data is encrypted. A type of routing mechanism is needed on top of IP to locate OceanStore data, and this is achieved by two methods: an attenuated Bloom filter (i.e. a vector of Bloom filters per node, where each element locates objects at a given graph distance) and, if the previous method fails, a highly redundant version of the Plaxton scheme. Various error-correcting schemes are also used. The network supports a type of introspection, in which it can automatically tune itself to network conditions. In addition to providing local access to global data, the system guards against natural disasters and denial-of-service (DoS) attacks.&lt;br /&gt;
&lt;br /&gt;
[http://www.cse.lehigh.edu/~brian/course/advanced-networking/reviews/manura-oceanstore.html]&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention what programming language they used, but it was based on Windows; they did not implement the full file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a university or large corporation, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Farsite&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
| Implemented content leases&lt;br /&gt;
| Update model handled data consistency, no leases&lt;br /&gt;
|-&lt;br /&gt;
| Single-tier, peer-to-peer model&lt;br /&gt;
| Two-tier, client-server model&lt;br /&gt;
|-&lt;br /&gt;
| Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Cryptographic public/private key security&lt;br /&gt;
| Read and write privileges&lt;br /&gt;
|-&lt;br /&gt;
| Randomized data replication&lt;br /&gt;
| Nomadic data concept&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== FARSITE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* Cascading certificate system through the directory hierarchy&lt;br /&gt;
* Keys&lt;br /&gt;
* Three types of certificates&lt;br /&gt;
* CFS required to authorize certificates&lt;br /&gt;
* Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, the group is trusted not to make an incorrect update to directory metadata. This metadata includes an access control list (ACL) of the public keys of all users who are authorized writers to that directory and to the files in it.&lt;br /&gt;
* Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039;&lt;br /&gt;
* Client monitor, directory group, file host&lt;br /&gt;
* When space runs out in a directory group, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== OCEANSTORE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* GUIDs and ACLs are used for writes; encryption is used for reads.&lt;br /&gt;
* To prevent unauthorized reads, the system encrypts all data that is not completely public and distributes the encryption key to those users with read permission.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20187</id>
		<title>DistOS 2015W Session 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_6&amp;diff=20187"/>
		<updated>2015-04-12T18:32:27Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* Midterm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
==Midterm==&lt;br /&gt;
 &lt;br /&gt;
The midterm from last year [http://homeostasis.scs.carleton.ca/~soma/distos/2015w/comp4000-2014w-midterm.pdf is now available].&lt;br /&gt;
&lt;br /&gt;
==FARSITE==&lt;br /&gt;
&lt;br /&gt;
Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.&lt;br /&gt;
&lt;br /&gt;
[http://research.microsoft.com/apps/pubs/default.aspx?id=67917]&lt;br /&gt;
&lt;br /&gt;
==Group 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Team:&#039;&#039;&#039; Kirill, Jamie, Alexis, Veena, Khaled, Hassan&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! FARSITE&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
! Fault Tolerance&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
| Used Byzantine Fault Tolerance Algorithm - Did not manage well&lt;br /&gt;
|-&lt;br /&gt;
! Cryptography&lt;br /&gt;
| Trusted Certificates&lt;br /&gt;
| A strong cryptographic algorithm on read-only operations&lt;br /&gt;
|-&lt;br /&gt;
! Implementation&lt;br /&gt;
| Did not mention what programming language they used, but it was based on Windows; they did not implement the full file system&lt;br /&gt;
| Implemented in Java&lt;br /&gt;
|-&lt;br /&gt;
! Scalability&lt;br /&gt;
| Scalable to a university or large corporation, maximum 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Worldwide scalability, maximum 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! File Usage&lt;br /&gt;
| Was designed for general purpose files&lt;br /&gt;
| Was designed for small file sizes&lt;br /&gt;
|-&lt;br /&gt;
! Scope&lt;br /&gt;
| All clients sharing the available resources&lt;br /&gt;
| Transient centralized service&lt;br /&gt;
|-&lt;br /&gt;
! Object Model&lt;br /&gt;
| Didn&#039;t use the object model&lt;br /&gt;
| Used the object model&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 2==&lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: Apoorv, Ambalica, Ashley, Eric, Mert, Shivjot&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Farsite&lt;br /&gt;
! OceanStore&lt;br /&gt;
|-&lt;br /&gt;
| Implemented content leases&lt;br /&gt;
| Update model handled data consistency, no leases&lt;br /&gt;
|-&lt;br /&gt;
| Single-tier, peer-to-peer model&lt;br /&gt;
| Two-tier, client-server model&lt;br /&gt;
|-&lt;br /&gt;
| Scope of 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Global scope (10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
| Cryptographic public/private key security&lt;br /&gt;
| Read and write privileges&lt;br /&gt;
|-&lt;br /&gt;
| Randomized data replication&lt;br /&gt;
| Nomadic data concept&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Group 3== &lt;br /&gt;
&#039;&#039;&#039;Team&#039;&#039;&#039;: DANY, MOE, DEEP, SAMEER, TROY&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== FARSITE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* Cascading certificate system through the directory hierarchy&lt;br /&gt;
* Keys&lt;br /&gt;
* Three types of certificates&lt;br /&gt;
* CFS required to authorize certificates&lt;br /&gt;
* Because directory groups only modify their shared state via a Byzantine-fault-tolerant protocol, the group is trusted not to make an incorrect update to directory metadata. This metadata includes an access control list (ACL) of the public keys of all users who are authorized writers to that directory and to the files in it.&lt;br /&gt;
* Both file content and user-sensitive metadata (meaning file and directory names) are encrypted for privacy.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;System Architecture&#039;&#039;&#039;&lt;br /&gt;
* Client monitor, directory group, file host&lt;br /&gt;
* When space runs out in a directory group, it delegates ownership of a subtree to another directory group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
== OCEANSTORE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Security&#039;&#039;&#039;&lt;br /&gt;
* GUIDs and ACLs are used for writes; encryption is used for reads.&lt;br /&gt;
* To prevent unauthorized reads, the system encrypts all data that is not completely public and distributes the encryption key to those users with read permission.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20186</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20186"/>
		<updated>2015-04-12T18:27:47Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* Chubby */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, which was discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
== How It Works ==&lt;br /&gt;
The Ceph file system runs on top of the same object storage system that provides Ceph&#039;s object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and filenames of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
The Ceph file system provides numerous benefits:&lt;br /&gt;
* Stronger data safety for mission-critical applications&lt;br /&gt;
* Virtually unlimited storage&lt;br /&gt;
* Applications that use ordinary file systems can use Ceph FS with POSIX semantics, with no integration or customization required&lt;br /&gt;
* Automatic balancing of the file system to deliver maximum performance&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Advantages of using File systems ==&lt;br /&gt;
* Supports heterogeneous operating systems, including all flavors of Unix as well as Linux and Windows&lt;br /&gt;
* Multiple client machines can access a single resource simultaneously&lt;br /&gt;
* Enables sharing common application binaries and read-only information rather than duplicating them on every machine, reducing overall disk storage cost and administration overhead&lt;br /&gt;
* Gives groups of users access to uniform data&lt;br /&gt;
* Useful when many users exist on many systems: instead of placing each user&#039;s home directory on every single machine, a network file system allows all home directories to be served from a single machine under /home&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate directly with OSDs to perform I/O operations&lt;br /&gt;
** Data is stored in objects (variable-size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with the MDS to perform metadata operations such as open and rename, and it manages the capabilities granted to each client.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** Metadata is distributed among multiple metadata servers using dynamic sub-tree partitioning: directories that are used more often have their metadata replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the locations of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;; see the sketch after this list)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
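&lt;br /&gt;
A minimal sketch in the spirit of the simplification above, using a rendezvous-hashing variant rather than the real CRUSH algorithm (which is hierarchical and weight-aware):&lt;br /&gt;
&lt;br /&gt;
 import hashlib&lt;br /&gt;
 &lt;br /&gt;
 def place(name, servers, replicas=3):&lt;br /&gt;
     # Every client ranks servers by hash(object name + server name),&lt;br /&gt;
     # so all clients compute the same replica set with no lookup.&lt;br /&gt;
     def rank(s):&lt;br /&gt;
         return hashlib.md5((name + s).encode()).digest()&lt;br /&gt;
     return sorted(servers, key=rank)[:replicas]&lt;br /&gt;
 &lt;br /&gt;
 print(place(&#039;myfile&#039;, [&#039;osd1&#039;, &#039;osd2&#039;, &#039;osd3&#039;, &#039;osd4&#039;]))&lt;br /&gt;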
&lt;br /&gt;
= Chubby =&lt;br /&gt;
Chubby is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
Chubby was designed as a lock service: a distributed system that clients connect to in order to share access to small files. The servers providing the service are partitioned into cells, and access to a particular file is managed through one elected master node in a cell. This master makes all decisions and informs the rest of the cell nodes of each decision. If the master fails, the other nodes elect a new master. The problem of asynchronous consensus is solved through the use of timeouts as a failure detector. To avoid the scaling bottleneck of a single master, the number of cells can be increased, at the cost of making some cells smaller.&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers, known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi-POSIX-compliant file system with a 256 KB file limit&lt;br /&gt;
** Permissions apply to files only, not folders, breaking some compatibility&lt;br /&gt;
** Trivial for programs to use: just use the standard &#039;&#039;fopen()&#039;&#039; family of calls (see the sketch after this list)&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which server is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved by using caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS for electing master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
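&lt;br /&gt;
A minimal sketch of how a client might use the file-like interface to hold a coarse-grained lock (the &#039;&#039;cell&#039;&#039; object and its methods are hypothetical, not Chubby&#039;s real API):&lt;br /&gt;
&lt;br /&gt;
 import time&lt;br /&gt;
 &lt;br /&gt;
 def hold_lock(cell, path, work):&lt;br /&gt;
     # Hypothetical flow: opening the file acquires the lock, and the&lt;br /&gt;
     # client must send keep-alives or Chubby will time the session out.&lt;br /&gt;
     handle = cell.open(path, mode=&#039;w&#039;)&lt;br /&gt;
     try:&lt;br /&gt;
         while not work.done():&lt;br /&gt;
             work.step()&lt;br /&gt;
             cell.keep_alive(handle)  # prove the client still exists&lt;br /&gt;
             time.sleep(1)&lt;br /&gt;
     finally:&lt;br /&gt;
         cell.close(handle)  # release the lock&lt;br /&gt;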
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Due to the use of Paxos (at the time the standard algorithm for this problem), the Chubby cell is limited to only five servers. While this limits fault tolerance, in practice this is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to verify that they still exist. This is to ensure that a server disappearing while holding a lock does not indefinitely hold that lock.&lt;br /&gt;
* Clients must also consequently re-verify that they hold a lock that they think they hold because Chubby may have timed them out.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20184</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20184"/>
		<updated>2015-04-11T19:25:31Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, which was discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
== How It Works ==&lt;br /&gt;
The Ceph file system runs on top of the same object storage system that provides Ceph&#039;s object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and filenames of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
The Ceph file system provides numerous benefits:&lt;br /&gt;
* Stronger data safety for mission-critical applications&lt;br /&gt;
* Virtually unlimited storage&lt;br /&gt;
* Applications that use ordinary file systems can use Ceph FS with POSIX semantics, with no integration or customization required&lt;br /&gt;
* Automatic balancing of the file system to deliver maximum performance&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Advantages of using File systems ==&lt;br /&gt;
* Supports heterogeneous operating systems, including all flavors of Unix as well as Linux and Windows&lt;br /&gt;
* Multiple client machines can access a single resource simultaneously&lt;br /&gt;
* Enables sharing common application binaries and read-only information rather than duplicating them on every machine, reducing overall disk storage cost and administration overhead&lt;br /&gt;
* Gives groups of users access to uniform data&lt;br /&gt;
* Useful when many users exist on many systems: instead of placing each user&#039;s home directory on every single machine, a network file system allows all home directories to be served from a single machine under /home&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate directly with OSDs to perform I/O operations&lt;br /&gt;
** Data is stored in objects (variable-size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with the MDS to perform metadata operations such as open and rename, and it manages the capabilities granted to each client.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** Metadata is distributed among multiple metadata servers using dynamic sub-tree partitioning: directories that are used more often have their metadata replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the locations of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
Chubby is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers, known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi-POSIX-compliant file system with a 256 KB file limit&lt;br /&gt;
** Permissions apply to files only, not folders, breaking some compatibility&lt;br /&gt;
** Trivial for programs to use: just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which server is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved by using caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS for electing master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Due to the use of Paxos (at the time the standard algorithm for this problem), the Chubby cell is limited to only five servers. While this limits fault tolerance, in practice this is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to verify that they still exist. This is to ensure that a server disappearing while holding a lock does not indefinitely hold that lock.&lt;br /&gt;
* Clients must also consequently re-verify that they hold a lock that they think they hold because Chubby may have timed them out.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20183</id>
		<title>DistOS 2015W Session 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2015W_Session_7&amp;diff=20183"/>
		<updated>2015-04-11T19:23:48Z</updated>

		<summary type="html">&lt;p&gt;Veenarose: /* Ceph */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ceph =&lt;br /&gt;
Unlike GFS, which was discussed previously, Ceph is a general-purpose distributed file system. It follows the same general model of distribution as GFS and Amoeba.&lt;br /&gt;
&lt;br /&gt;
== How It Works ==&lt;br /&gt;
The Ceph file system runs on top of the same object storage system that provides Ceph&#039;s object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and filenames of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
The Ceph file system provides numerous benefits:&lt;br /&gt;
* Stronger data safety for mission-critical applications&lt;br /&gt;
* Virtually unlimited storage&lt;br /&gt;
* Applications that use ordinary file systems can use Ceph FS with POSIX semantics, with no integration or customization required&lt;br /&gt;
* Automatic balancing of the file system to deliver maximum performance&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Components ==&lt;br /&gt;
* Client&lt;br /&gt;
&lt;br /&gt;
* Cluster of Object Storage Devices (OSD)&lt;br /&gt;
** Stores data and metadata; clients communicate directly with OSDs to perform I/O operations&lt;br /&gt;
** Data is stored in objects (variable-size chunks)&lt;br /&gt;
&lt;br /&gt;
* Meta-data Server (MDS)&lt;br /&gt;
** Manages files and directories. Clients interact with the MDS to perform metadata operations such as open and rename, and it manages the capabilities granted to each client.&lt;br /&gt;
** Clients &#039;&#039;&#039;do not&#039;&#039;&#039; need to access MDSs to find where data is stored, improving scalability (more on that below)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
* Decoupled data and metadata&lt;br /&gt;
&lt;br /&gt;
* Dynamic Distributed Metadata Management&lt;br /&gt;
&lt;br /&gt;
** Metadata is distributed among multiple metadata servers using dynamic sub-tree partitioning: directories that are used more often have their metadata replicated to more servers, spreading the load. This happens completely automatically&lt;br /&gt;
&lt;br /&gt;
* Object based storage&lt;br /&gt;
** Uses cluster of OSDs to form a Reliable Autonomic Distributed Object-Store (RADOS) for Ceph failure detection and recovery&lt;br /&gt;
&lt;br /&gt;
* CRUSH (Controlled Replication Under Scalable Hashing)&lt;br /&gt;
** The hashing algorithm used to calculate the locations of objects instead of looking them up&lt;br /&gt;
** This significantly reduces the load on the MDSs because each client has enough information to independently determine where things should be located.&lt;br /&gt;
** Responsible for automatically moving data when OSDs are added or removed (can be simplified as &#039;&#039;location = CRUSH(filename) % num_servers&#039;&#039;)&lt;br /&gt;
** The CRUSH paper on Ceph’s website can be [http://ceph.com/papers/weil-crush-sc06.pdf viewed here]&lt;br /&gt;
&lt;br /&gt;
* RADOS (Reliable Autonomic Distributed Object-Store) is the object store for Ceph&lt;br /&gt;
&lt;br /&gt;
= Chubby =&lt;br /&gt;
Chubby is a coarse-grained lock service, built and used internally by Google, that serves many clients with a small number of servers (a Chubby cell).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== System Components ==&lt;br /&gt;
* Chubby Cell&lt;br /&gt;
** Handles the actual locks&lt;br /&gt;
** Typically consists of five servers, known as replicas&lt;br /&gt;
** Consensus protocol ([https://en.wikipedia.org/wiki/Paxos_(computer_science) Paxos]) is used to elect the master from replicas&lt;br /&gt;
&lt;br /&gt;
* Client&lt;br /&gt;
** Used by programs to request and use locks&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main Features ==&lt;br /&gt;
* Implemented as a semi-POSIX-compliant file system with a 256 KB file limit&lt;br /&gt;
** Permissions apply to files only, not folders, breaking some compatibility&lt;br /&gt;
** Trivial for programs to use: just use the standard &#039;&#039;fopen()&#039;&#039; family of calls&lt;br /&gt;
&lt;br /&gt;
* Uses a consensus algorithm (Paxos) among a set of servers to agree on which server is the master in charge of the metadata&lt;br /&gt;
&lt;br /&gt;
* Meant for locks that last hours or days, not seconds (thus, &amp;quot;coarse grained&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
* A master server can handle tens of thousands of simultaneous connections&lt;br /&gt;
** This can be further improved by using caching servers, as most of the traffic is keep-alive messages&lt;br /&gt;
&lt;br /&gt;
* Used by GFS for electing master server&lt;br /&gt;
&lt;br /&gt;
* Also used by Google as a nameserver&lt;br /&gt;
&lt;br /&gt;
== Issues ==&lt;br /&gt;
* Due to the use of Paxos (at the time the standard algorithm for this problem), the Chubby cell is limited to only five servers. While this limits fault tolerance, in practice this is more than enough.&lt;br /&gt;
* Since it has a file-system client interface, many programmers tend to abuse the system and need education (even inside Google)&lt;br /&gt;
* Clients need to constantly ping Chubby to verify that they still exist. This is to ensure that a server disappearing while holding a lock does not indefinitely hold that lock.&lt;br /&gt;
* Clients must also consequently re-verify that they hold a lock that they think they hold because Chubby may have timed them out.&lt;/div&gt;</summary>
		<author><name>Veenarose</name></author>
	</entry>
</feed>