<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nelaturuk</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nelaturuk"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Nelaturuk"/>
	<updated>2026-05-02T02:27:33Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19004</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19004"/>
		<updated>2014-04-08T14:39:29Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines&lt;br /&gt;
&lt;br /&gt;
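The dwarfs differ mainly in how much communication they need between units of work. Monte Carlo is the extreme case - embarrassingly parallel, since samples are independent - which is why it distributes so easily. A minimal sketch in plain Python (illustrative, not from the lecture):

```python
import random

def sample_pi(n, seed):
    # One independent chunk of work: count random points inside the
    # unit quarter-circle. Chunks share no state, so they could run
    # on separate cores or machines and be summed at the end.
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y >= 1.0:
            continue
        inside += 1
    return inside

# "Reduce" step: sum the per-chunk counts (run serially here; in a
# real system each call would be shipped to a worker).
chunks = [sample_pi(100_000, seed) for seed in range(4)]
pi_est = 4.0 * sum(chunks) / 400_000
print(pi_est)
```

Each chunk could be shipped to a separate core or machine, with only the per-chunk counts merged at the end.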
===Features===&lt;br /&gt;
* Pretty impressive that they got everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce&lt;br /&gt;
* Programs that run on distributed operating systems: which applications can be expected to be massively parallel, what sort of computational model is needed, and what abstractions are needed on top of the stack.&lt;br /&gt;
* Predictions about processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; they just don&#039;t run fast enough.&lt;br /&gt;
* Games are about the only workloads that can&#039;t keep running on a single thread&lt;br /&gt;
* Low power&lt;br /&gt;
* Being able to run a smartphone with 100s of transistors - stalled with sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games - games - games&lt;br /&gt;
* Doomsday of the IT industry&lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux was not a very general operating system when it started - it was hard-coded to the 386 processor, specialized to run on one box. Now it runs everywhere. It has abstractions dealing with the various aspects of hardware and architecture - multiple layers of abstraction, because they proved useful.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19003</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19003"/>
		<updated>2014-04-08T14:39:10Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive that they got everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce&lt;br /&gt;
* Programs that run on distributed operating systems: which applications can be expected to be massively parallel, what sort of computational model is needed, and what abstractions are needed on top of the stack.&lt;br /&gt;
* Predictions about processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; they just don&#039;t run fast enough.&lt;br /&gt;
* Games are about the only workloads that can&#039;t keep running on a single thread&lt;br /&gt;
* Low power&lt;br /&gt;
* Being able to run a smartphone with 100s of transistors - stalled with sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games - games - games&lt;br /&gt;
* Doomsday of the IT industry&lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux was not a very general operating system when it started - it was hard-coded to the 386 processor, specialized to run on one box. Now it runs everywhere. It has abstractions dealing with the various aspects of hardware and architecture.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19002</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19002"/>
		<updated>2014-04-08T14:38:31Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive that they got everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce&lt;br /&gt;
* Programs that run on distributed operating systems: which applications can be expected to be massively parallel, what sort of computational model is needed, and what abstractions are needed on top of the stack.&lt;br /&gt;
* Predictions about processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; they just don&#039;t run fast enough.&lt;br /&gt;
* Games are about the only workloads that can&#039;t keep running on a single thread&lt;br /&gt;
* Low power&lt;br /&gt;
* Being able to run a smartphone with 100s of transistors - stalled with sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games - games - games&lt;br /&gt;
* Doomsday of the IT industry&lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
* Linux was not a very general operating system when it started - it was hard-coded to the 386 processor, specialized to run on one box&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19001</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19001"/>
		<updated>2014-04-08T14:31:50Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive that they got everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce&lt;br /&gt;
* Programs that run on distributed operating systems: which applications can be expected to be massively parallel, what sort of computational model is needed, and what abstractions are needed on top of the stack.&lt;br /&gt;
* Predictions about processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; they just don&#039;t run fast enough.&lt;br /&gt;
* Games are about the only workloads that can&#039;t keep running on a single thread&lt;br /&gt;
* Low power&lt;br /&gt;
* Being able to run a smartphone with 100s of transistors - stalled with sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games - games - games&lt;br /&gt;
* Doomsday of the IT industry&lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19000</id>
		<title>DistOS 2014W Lecture 24</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_24&amp;diff=19000"/>
		<updated>2014-04-08T14:31:36Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===The Landscape of Parallel Computing Research: A View from Berkeley===&lt;br /&gt;
* What sort of applications can you expect to run on a distributed OS / parallelize?&lt;br /&gt;
* How do you scale up?&lt;br /&gt;
* We can&#039;t rely on processor improvements to provide speed-ups&lt;br /&gt;
* The proposed computational models that need more processor power don&#039;t really apply to regular users&lt;br /&gt;
* Users would see the advances with games primarily&lt;br /&gt;
* More reliance on cloud computing in recent years&lt;br /&gt;
&lt;br /&gt;
==7 Dwarfs==&lt;br /&gt;
* Dense Linear Algebra&lt;br /&gt;
** Hard to parallelize&lt;br /&gt;
* Sparse Linear Algebra&lt;br /&gt;
* Spectral Methods&lt;br /&gt;
* N-Body Methods&lt;br /&gt;
* Structured Grids&lt;br /&gt;
* Unstructured Grids&lt;br /&gt;
* Monte Carlo&lt;br /&gt;
==Extended Dwarfs==&lt;br /&gt;
* Combinational Logic&lt;br /&gt;
* Graph Traversal&lt;br /&gt;
* Dynamic Programming&lt;br /&gt;
* Backtrack/Branch + Bound&lt;br /&gt;
* Construct Graphical Models&lt;br /&gt;
* Finite State Machines&lt;br /&gt;
&lt;br /&gt;
===Features===&lt;br /&gt;
* Pretty impressive that they got everyone to sign off on the report&lt;br /&gt;
* Connection to MapReduce&lt;br /&gt;
* Programs that run on distributed operating systems: which applications can be expected to be massively parallel, what sort of computational model is needed, and what abstractions are needed on top of the stack.&lt;br /&gt;
* Predictions about processing power&lt;br /&gt;
* GPUs do have 1000 or more cores&lt;br /&gt;
* Desktop cores have not gotten much faster over the past years; they just don&#039;t run fast enough.&lt;br /&gt;
* Games are about the only workloads that can&#039;t keep running on a single thread&lt;br /&gt;
* Low power&lt;br /&gt;
* Being able to run a smartphone with 100s of transistors - stalled with sequential processing&lt;br /&gt;
* What do we need the additional processing power for? Games - games - games&lt;br /&gt;
* Doomsday of the IT industry&lt;br /&gt;
* Massive change in mobile and cloud over the past five years&lt;br /&gt;
Dwarfs:&lt;br /&gt;
* Dense linear algebra - Sparse linear algebra - Spectral methods - N-body methods - Structured grids - Unstructured grids - Monte Carlo - Combinational logic - Graph traversal - Dynamic programming - Backtrack/Branch and bound - Construct graphical models - Finite state machines&lt;br /&gt;
* Of these, some can be programmed in parallel and some are better suited to sequential execution&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18978</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18978"/>
		<updated>2014-04-03T15:18:36Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Distributed Stream Processing - Ronak Chaudhari */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems.&lt;br /&gt;
** What&#039;s more suitable for mobile-based systems.&lt;br /&gt;
** Why is fault tolerance one of the main issues for DFSs?&lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking - Read and write locks&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Example DFS mentioned in the paper&lt;br /&gt;
** Google File Systems&lt;br /&gt;
** HDFS &lt;br /&gt;
** MOOSEFS&lt;br /&gt;
** iRODS&lt;br /&gt;
** GlusterFS&lt;br /&gt;
** Lustre&lt;br /&gt;
** Ceph&lt;br /&gt;
** PARADISE for mobile&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
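The three replication methods above trade latency for freshness. A toy sketch (illustrative names, not from the survey) of the synchronous vs. asynchronous split: synchronous writes wait for every replica to acknowledge; asynchronous writes return once the primary applies them, and replicas catch up from a log later.

```python
class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value
        return True  # acknowledgement

def write_sync(replicas, key, value):
    # Synchronous method: block until every replica has acknowledged.
    # Safe (no stale replica) but slow; one dead replica stalls the write.
    acks = [r.apply(key, value) for r in replicas]
    return all(acks)

def write_async(replicas, key, value, log):
    # Asynchronous method: apply on the primary, queue the rest.
    # Fast, but secondary replicas lag, so reads there may be stale.
    replicas[0].apply(key, value)
    log.append((key, value))  # replayed to the other replicas later
    return True

replicas = [Replica() for _ in range(3)]
log = []
write_sync(replicas, "a", 1)
write_async(replicas, "b", 2, log)
print(replicas[1].data)  # secondary has "a" but not yet "b"
```

A semi-asynchronous method would sit between the two: acknowledge after some quorum of replicas, not one and not all.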
===Survey on Control Plane Frameworks for Software Defined Networking - Sijo===&lt;br /&gt;
* Introduction&lt;br /&gt;
** Traditional Networks - Control Plane and Forwarding Plane&lt;br /&gt;
** Software Defined Networking&lt;br /&gt;
 - Proposes decoupling of layers into independent layers&lt;br /&gt;
 - Network entities or nodes are specialized elements which do the forwarding&lt;br /&gt;
 - Control applications do not need to worry about installation of the underlying network&lt;br /&gt;
* Theme, Argument Outline&lt;br /&gt;
 - Look at various control frameworks proposed&lt;br /&gt;
* Controller Platforms&lt;br /&gt;
 - Centralized and Distributed approaches&lt;br /&gt;
 - Identify the need to use in controller platforms&lt;br /&gt;
 - For centralized it started with NOX - Maestro - Beacon - Floodlight - POX - OpenDayLight&lt;br /&gt;
 - For Distributed : ONIX - Hyperflow - YANC - ONOS&lt;br /&gt;
 - Leverage parallel processing capabilities&lt;br /&gt;
* In detail about two systems: &lt;br /&gt;
** ONIX&lt;br /&gt;
** ONOS &lt;br /&gt;
* References&lt;br /&gt;
&lt;br /&gt;
===Metadata management in Distributed File System - Sandarbh===&lt;br /&gt;
* What is metadata? &lt;br /&gt;
- Define by bare-minimum functions for MDS (Metadata Server)&lt;br /&gt;
- Monitor the performance of DFS so that it can be used further&lt;br /&gt;
- Structure of metadata in Paper&lt;br /&gt;
* Why is Metadata management difficult? &lt;br /&gt;
- 50% file operations are metadata operations&lt;br /&gt;
- Size of metadata&lt;br /&gt;
- Distribute the load evenly across all MDS&lt;br /&gt;
- Be able to handle thousands of clients &lt;br /&gt;
- Be able to handle file/directory permission change&lt;br /&gt;
- Recover data if some MDS goes down&lt;br /&gt;
- Be POSIX compliant&lt;br /&gt;
- Be able to scale - addition of new MDS shouldn&#039;t cause ripples&lt;br /&gt;
- Contrasting goals - replication and consistency - Average case improvements vs guaranteed performance for each access&lt;br /&gt;
* Static sub-tree partitioning&lt;br /&gt;
- Advantage - Clients know which MDS to contact for the file - Prefix caching &lt;br /&gt;
- Disadvantage - Directory hot spot formation&lt;br /&gt;
* Static hashing based partitioning &lt;br /&gt;
- Hash the filename or File identifier and assign to MDS&lt;br /&gt;
- Advantage - Distributes load evenly - Gets rid of hotspots&lt;br /&gt;
- Disadvantage &lt;br /&gt;
* Don&#039;t ask me where your server is approach&lt;br /&gt;
- Ex: Ceph, GlusterFS, OceanStore, Hierarchical Bloom filters, Cassandra&lt;br /&gt;
- Responsibilities - Replica mgmt, Consistency, Access control, Recover metadata in case of crash, Talk to each other to handle the load dynamically &lt;br /&gt;
* What&#039;s not in the slides &lt;br /&gt;
- Not focused on replication of metadata&lt;br /&gt;
- Semantic based search&lt;br /&gt;
* Structure of the survey&lt;br /&gt;
- Conventional metadata systems&lt;br /&gt;
- No-metadata approach&lt;br /&gt;
- Metadata approach of file systems designed for specific goals - GFS, Haystack, etc.&lt;br /&gt;
- Evolution history&lt;br /&gt;
- Comparison within category&lt;br /&gt;
- Cover reliability and consistency part&lt;br /&gt;
- Summarize learnings with expected trends&lt;br /&gt;
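The static-hashing scheme above also explains the ripple concern: with hash(name) mod N, adding one MDS changes N and remaps most files to a different server. A toy demonstration (illustrative, not from the survey):

```python
import hashlib

def mds_for(path, num_servers):
    # Static hashing: hash the filename and take it mod the number of
    # metadata servers. Load spreads evenly and directory hotspots
    # disappear, but placement depends on num_servers.
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) % num_servers

paths = ["/home/u%d/file%d" % (u, f) for u in range(10) for f in range(100)]

before = {p: mds_for(p, 4) for p in paths}
after = {p: mds_for(p, 5) for p in paths}  # one MDS added

# Most entries move: this is the ripple that consistent-hashing-style
# placement is designed to avoid.
moved = sum(1 for p in paths if before[p] != after[p])
print(moved, "of", len(paths), "files changed servers")
```

This is the failure mode that the "don&#039;t ask me where your server is" systems listed above (Ceph, Cassandra) avoid with consistent-hashing-style placement, where adding a server moves only about 1/N of the entries.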
&lt;br /&gt;
===Distributed Stream Processing - Ronak Chaudhari===&lt;br /&gt;
* About Stream processing&lt;br /&gt;
- Data streams &lt;br /&gt;
- DBMS vs Stream processing &lt;br /&gt;
* Applications&lt;br /&gt;
- Monitoring applications&lt;br /&gt;
- Military applications&lt;br /&gt;
- Financial analysis&lt;br /&gt;
- Tracking applications&lt;br /&gt;
* Aurora &lt;br /&gt;
- Process incoming streams&lt;br /&gt;
- It has its own query algebra&lt;br /&gt;
- System Model - Query Model - Runtime Architecture &lt;br /&gt;
- QoS criteria&lt;br /&gt;
- SQuAL - Query algebra&lt;br /&gt;
- Aurora GUI&lt;br /&gt;
- Challenges in distributed operation&lt;br /&gt;
* Aurora vs Medusa&lt;br /&gt;
* Medusa&lt;br /&gt;
- Architecture&lt;br /&gt;
- Addition to Aurora - Lookup and Brain&lt;br /&gt;
- Failure detection&lt;br /&gt;
- Transfer of processing &lt;br /&gt;
- System API&lt;br /&gt;
- Load management &lt;br /&gt;
- High availability &lt;br /&gt;
- Benefits&lt;br /&gt;
* References&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18977</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18977"/>
		<updated>2014-04-03T15:18:18Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Ronak Chaudhari */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems.&lt;br /&gt;
** What&#039;s more suitable for mobile-based systems.&lt;br /&gt;
** Why is fault tolerance one of the main issues for DFSs?&lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking - Read and write locks&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Example DFS mentioned in the paper&lt;br /&gt;
** Google File Systems&lt;br /&gt;
** HDFS &lt;br /&gt;
** MOOSEFS&lt;br /&gt;
** iRODS&lt;br /&gt;
** GlusterFS&lt;br /&gt;
** Lustre&lt;br /&gt;
** Ceph&lt;br /&gt;
** PARADISE for mobile&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Survey on Control Plane Frameworks for Software Defined Networking - Sijo===&lt;br /&gt;
* Introduction&lt;br /&gt;
** Traditional Networks - Control Plane and Forwarding Plane&lt;br /&gt;
** Software Defined Networking&lt;br /&gt;
 - Proposes decoupling of layers into independent layers&lt;br /&gt;
 - Network entities or nodes are specialized elements which do the forwarding&lt;br /&gt;
 - Control applications do not need to worry about installation of the underlying network&lt;br /&gt;
* Theme, Argument Outline&lt;br /&gt;
 - Look at various control frameworks proposed&lt;br /&gt;
* Controller Platforms&lt;br /&gt;
 - Centralized and Distributed approaches&lt;br /&gt;
 - Identify the need to use in controller platforms&lt;br /&gt;
 - For centralized it started with NOX - Maestro - Beacon - Floodlight - POX - OpenDayLight&lt;br /&gt;
 - For Distributed : ONIX - Hyperflow - YANC - ONOS&lt;br /&gt;
 - Leverage parallel processing capabilities&lt;br /&gt;
* In detail about two systems: &lt;br /&gt;
** ONIX&lt;br /&gt;
** ONOS &lt;br /&gt;
* References&lt;br /&gt;
&lt;br /&gt;
===Metadata management in Distributed File System - Sandarbh===&lt;br /&gt;
* What is metadata? &lt;br /&gt;
- Define by bare-minimum functions for MDS (Metadata Server)&lt;br /&gt;
- Monitor the performance of DFS so that it can be used further&lt;br /&gt;
- Structure of metadata in Paper&lt;br /&gt;
* Why is Metadata management difficult? &lt;br /&gt;
- 50% file operations are metadata operations&lt;br /&gt;
- Size of metadata&lt;br /&gt;
- Distribute the load evenly across all MDS&lt;br /&gt;
- Be able to handle thousands of clients &lt;br /&gt;
- Be able to handle file/directory permission change&lt;br /&gt;
- Recover data if some MDS goes down&lt;br /&gt;
- Be POSIX compliant&lt;br /&gt;
- Be able to scale - addition of new MDS shouldn&#039;t cause ripples&lt;br /&gt;
- Contrasting goals - replication and consistency - Average case improvements vs guaranteed performance for each access&lt;br /&gt;
* Static sub-tree partitioning&lt;br /&gt;
- Advantage - Clients know which MDS to contact for the file - Prefix caching &lt;br /&gt;
- Disadvantage - Directory hot spot formation&lt;br /&gt;
* Static hashing based partitioning &lt;br /&gt;
- Hash the filename or File identifier and assign to MDS&lt;br /&gt;
- Advantage - Distributes load evenly - Gets rid of hotspots&lt;br /&gt;
- Disadvantage &lt;br /&gt;
* Don&#039;t ask me where your server is approach&lt;br /&gt;
- Ex: Ceph, GlusterFS, OceanStore, Hierarchical Bloom filters, Cassandra&lt;br /&gt;
- Responsibilities - Replica mgmt, Consistency, Access control, Recover metadata in case of crash, Talk to each other to handle the load dynamically &lt;br /&gt;
* What&#039;s not in the slides &lt;br /&gt;
- Not focused on replication of metadata&lt;br /&gt;
- Semantic based search&lt;br /&gt;
* Structure of the survey&lt;br /&gt;
- Conventional metadata systems&lt;br /&gt;
- No-metadata approach&lt;br /&gt;
- Metadata approach of file systems designed for specific goals - GFS, Haystack, etc.&lt;br /&gt;
- Evolution history&lt;br /&gt;
- Comparison within category&lt;br /&gt;
- Cover reliability and consistency part&lt;br /&gt;
- Summarize learnings with expected trends&lt;br /&gt;
&lt;br /&gt;
===Distributed Stream Processing - Ronak Chaudhari===&lt;br /&gt;
* About Stream processing&lt;br /&gt;
- Data streams &lt;br /&gt;
- DBMS vs Stream processing &lt;br /&gt;
* Applications&lt;br /&gt;
- Monitoring applications&lt;br /&gt;
- Military applications&lt;br /&gt;
- Financial analysis&lt;br /&gt;
- Tracking applications&lt;br /&gt;
* Aurora &lt;br /&gt;
- Process incoming streams&lt;br /&gt;
- It has its own query algebra&lt;br /&gt;
- System Model - Query Model - Runtime Architecture &lt;br /&gt;
- QoS criteria&lt;br /&gt;
- SQuAL - Query algebra&lt;br /&gt;
- Aurora GUI&lt;br /&gt;
- Challenges in distributed operation&lt;br /&gt;
* Aurora vs Medusa&lt;br /&gt;
* Medusa&lt;br /&gt;
- Architecture&lt;br /&gt;
- Addition to Aurora - Lookup and Brain&lt;br /&gt;
- Failure detection&lt;br /&gt;
- Transfer of processing &lt;br /&gt;
- System API&lt;br /&gt;
- Load management &lt;br /&gt;
- High availability &lt;br /&gt;
- Benefits&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18976</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18976"/>
		<updated>2014-04-03T15:05:08Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Sandarbh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introduction&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems.&lt;br /&gt;
** What&#039;s more suitable for mobile-based systems.&lt;br /&gt;
** Why is fault tolerance one of the main issues for DFSs?&lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking - Read and write locks&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Example DFS mentioned in the paper&lt;br /&gt;
** Google File Systems&lt;br /&gt;
** HDFS &lt;br /&gt;
** MOOSEFS&lt;br /&gt;
** iRODS&lt;br /&gt;
** GlusterFS&lt;br /&gt;
** Lustre&lt;br /&gt;
** Ceph&lt;br /&gt;
** PARADISE for mobile&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Survey on Control Plane Frameworks for Software Defined Networking - Sijo===&lt;br /&gt;
* Introduction&lt;br /&gt;
** Traditional Networks - Control Plane and Forwarding Plane&lt;br /&gt;
** Software Defined Networking&lt;br /&gt;
 - Proposes decoupling of layers into independent layers&lt;br /&gt;
 - Network entities or nodes are specialized elements which do the forwarding&lt;br /&gt;
 - Control applications do not need to worry about installation of the underlying network&lt;br /&gt;
* Theme, Argument Outline&lt;br /&gt;
 - Look at various control frameworks proposed&lt;br /&gt;
* Controller Platforms&lt;br /&gt;
 - Centralized and Distributed approaches&lt;br /&gt;
 - Identify the need to use in controller platforms&lt;br /&gt;
 - For centralized it started with NOX - Maestro - Beacon - Floodlight - POX - OpenDayLight&lt;br /&gt;
 - For Distributed : ONIX - Hyperflow - YANC - ONOS&lt;br /&gt;
 - Leverage parallel processing capabilities&lt;br /&gt;
* In detail about two systems: &lt;br /&gt;
** ONIX&lt;br /&gt;
** ONOS &lt;br /&gt;
* References&lt;br /&gt;
&lt;br /&gt;
===Metadata management in Distributed File System - Sandarbh===&lt;br /&gt;
* What is metadata? &lt;br /&gt;
- Define by bare-minimum functions for MDS (Metadata Server)&lt;br /&gt;
- Monitor the performance of DFS so that it can be used further&lt;br /&gt;
- Structure of metadata in Paper&lt;br /&gt;
* Why is Metadata management difficult? &lt;br /&gt;
- 50% file operations are metadata operations&lt;br /&gt;
- Size of metadata&lt;br /&gt;
- Distribute the load evenly across all MDS&lt;br /&gt;
- Be able to handle thousands of clients &lt;br /&gt;
- Be able to handle file/directory permission change&lt;br /&gt;
- Recover data if some MDS goes down&lt;br /&gt;
- Be POSIX compliant&lt;br /&gt;
- Be able to scale - addition of new MDS shouldn&#039;t cause ripples&lt;br /&gt;
- Contrasting goals - replication and consistency - Average case improvements vs guaranteed performance for each access&lt;br /&gt;
* Static sub-tree partitioning&lt;br /&gt;
- Advantage - Clients know which MDS to contact for the file - Prefix caching &lt;br /&gt;
- Disadvantage - Directory hot spot formation&lt;br /&gt;
* Static hashing based partitioning &lt;br /&gt;
- Hash the filename or File identifier and assign to MDS&lt;br /&gt;
- Advantage - Distributes load evenly - Gets rid of hotspots&lt;br /&gt;
- Disadvantage &lt;br /&gt;
* Don&#039;t ask me where your server is approach&lt;br /&gt;
- Ex: Ceph, GlusterFS, OceanStore, Hierarchical Bloom filters, Cassandra&lt;br /&gt;
- Responsibilities - Replica mgmt, Consistency, Access control, Recover metadata in case of crash, Talk to each other to handle the load dynamically &lt;br /&gt;
* What&#039;s not in the slides &lt;br /&gt;
- Not focused on replication of metadata&lt;br /&gt;
- Semantic based search&lt;br /&gt;
* Structure of the survey&lt;br /&gt;
- Conventional metadata systems&lt;br /&gt;
- No-metadata approach&lt;br /&gt;
- Metadata approach of the file systems designed for specific goals - GFS, Haystack, etc.&lt;br /&gt;
- Evolution history&lt;br /&gt;
- Comparison within category&lt;br /&gt;
- Cover reliability and consistency part&lt;br /&gt;
- Summarize learnings with expected trends&lt;br /&gt;
&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18975</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18975"/>
		<updated>2014-04-03T14:50:12Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Survey on Control Plane Frameworks for Software Defined Networking - Sijo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introductions&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems. &lt;br /&gt;
** What&#039;s more suitable for mobile-based systems? &lt;br /&gt;
** Why is high fault tolerance one of the main issues for DFSs? &lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking - Read and write locks&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Example DFS mentioned in the paper&lt;br /&gt;
** Google File Systems&lt;br /&gt;
** HDFS &lt;br /&gt;
** MOOSEFS&lt;br /&gt;
** iRODS&lt;br /&gt;
** GlusterFS&lt;br /&gt;
** Lustre&lt;br /&gt;
** Ceph&lt;br /&gt;
** PARADISE for mobile&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Survey on Control Plane Frameworks for Software Defined Networking - Sijo===&lt;br /&gt;
* Introduction&lt;br /&gt;
** Traditional Networks - Control Plane and Forwarding Plane&lt;br /&gt;
** Software Defined Networking&lt;br /&gt;
 - Proposes decoupling of layers into independent layers&lt;br /&gt;
 - Network entities or nodes are specialized elements which do the forwarding &lt;br /&gt;
 - Control applications do not need to worry about installation of the underlying network&lt;br /&gt;
* Theme, Argument Outline&lt;br /&gt;
 - Look at various control frameworks proposed&lt;br /&gt;
* Controller Platforms&lt;br /&gt;
 - Centralized and Distributed approaches&lt;br /&gt;
 - Identify the need for controller platforms&lt;br /&gt;
 - For centralized it started with NOX - Maestro - Beacon - Floodlight - POX - OpenDayLight&lt;br /&gt;
 - For Distributed : ONIX - Hyperflow - YANC - ONOS&lt;br /&gt;
 - Leverage parallel processing capabilities&lt;br /&gt;
* In detail about two systems: &lt;br /&gt;
** ONIX&lt;br /&gt;
** ONOS &lt;br /&gt;
* References&lt;br /&gt;
&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18974</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18974"/>
		<updated>2014-04-03T14:47:34Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Sijo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introductions&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems. &lt;br /&gt;
** What&#039;s more suitable for mobile-based systems? &lt;br /&gt;
** Why is high fault tolerance one of the main issues for DFSs? &lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking - Read and write locks&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Example DFS mentioned in the paper&lt;br /&gt;
** Google File Systems&lt;br /&gt;
** HDFS &lt;br /&gt;
** MOOSEFS&lt;br /&gt;
** iRODS&lt;br /&gt;
** GlusterFS&lt;br /&gt;
** Lustre&lt;br /&gt;
** Ceph&lt;br /&gt;
** PARADISE for mobile&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Survey on Control Plane Frameworks for Software Defined Networking - Sijo===&lt;br /&gt;
* Introduction&lt;br /&gt;
** Traditional Networks - Control Plane and Forwarding Plane&lt;br /&gt;
** Software Defined Networking&lt;br /&gt;
 - Proposes decoupling of layers into independent layers&lt;br /&gt;
 - Network entities or nodes are specialized elements which do the forwarding &lt;br /&gt;
 - Control applications do not need to worry about installation of the underlying network&lt;br /&gt;
* Theme, Argument Outline&lt;br /&gt;
 - Look at various control frameworks proposed&lt;br /&gt;
* Controller Platforms&lt;br /&gt;
 - Centralized and Distributed approaches&lt;br /&gt;
 - Identify the need for controller platforms&lt;br /&gt;
 - For centralized it started with NOX - Maestro - Beacon - Floodlight - POX - OpenDayLight&lt;br /&gt;
 - For Distributed : ONIX - Hyperflow - YANC - ONOS&lt;br /&gt;
 - Leverage parallel processing capabilities&lt;br /&gt;
* In detail about two systems: &lt;br /&gt;
** ONIX&lt;br /&gt;
** ONOS&lt;br /&gt;
* References&lt;br /&gt;
&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18973</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18973"/>
		<updated>2014-04-03T14:36:03Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Survey: Fault Tolerance in Distributed File System - Mohammed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introductions&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems. &lt;br /&gt;
** What&#039;s more suitable for mobile-based systems? &lt;br /&gt;
** Why is high fault tolerance one of the main issues for DFSs? &lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking - Read and write locks&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Example DFS mentioned in the paper&lt;br /&gt;
** Google File Systems&lt;br /&gt;
** HDFS &lt;br /&gt;
** MOOSEFS&lt;br /&gt;
** iRODS&lt;br /&gt;
** GlusterFS&lt;br /&gt;
** Lustre&lt;br /&gt;
** Ceph&lt;br /&gt;
** PARADISE for mobile&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18972</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18972"/>
		<updated>2014-04-03T14:31:52Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Survey: Fault Tolerance in Distributed File System - Mohammed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introductions&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems. &lt;br /&gt;
** What&#039;s more suitable for mobile-based systems? &lt;br /&gt;
** Why is high fault tolerance one of the main issues for DFSs? &lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
** What is the Replica and Placement policy? What is the synchronization? What is its benefit? &lt;br /&gt;
   - Synchronous Method&lt;br /&gt;
   - Asynchronous Method&lt;br /&gt;
   - Semi-Asynchronous Method&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
** What is the cache? What is its benefit? Cache consistency? &lt;br /&gt;
  - Write Once Read Many (WORM)&lt;br /&gt;
  - Transactional Locking&lt;br /&gt;
  - Leasing&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18971</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18971"/>
		<updated>2014-04-03T14:27:58Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Mohammed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Survey: Fault Tolerance in Distributed File System - Mohammed===&lt;br /&gt;
* Abstract&lt;br /&gt;
* Introductions&lt;br /&gt;
** About fault tolerance in any distributed system. Comparison between different file systems. What&#039;s more suitable for mobile-based systems? Why is high fault tolerance one of the main issues for DFSs? &lt;br /&gt;
&lt;br /&gt;
* Replication and fault tolerance&lt;br /&gt;
* Cache consistency and fault tolerance&lt;br /&gt;
* Conclusion&lt;br /&gt;
&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18970</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18970"/>
		<updated>2014-04-03T14:23:50Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Mojgan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Distributed Shared Memory Systems - Mojgan===&lt;br /&gt;
* Introduction to DSM systems&lt;br /&gt;
* Advantages and Disadvantages&lt;br /&gt;
* Classification of DSM systems&lt;br /&gt;
* Design considerations&lt;br /&gt;
* Examples of DSM systems&lt;br /&gt;
 - OpenSSI&lt;br /&gt;
 - Mermaid&lt;br /&gt;
 - MOSIX&lt;br /&gt;
 - DDM&lt;br /&gt;
&lt;br /&gt;
===Mohammed===&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18969</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18969"/>
		<updated>2014-04-03T14:01:24Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Presentations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Presentations&#039;&#039;&#039;&lt;br /&gt;
===Mojgan===&lt;br /&gt;
===Mohammed===&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18968</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18968"/>
		<updated>2014-04-03T13:59:55Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Mohammed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18967</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18967"/>
		<updated>2014-04-03T13:59:48Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Sijo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Mohammed===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18966</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18966"/>
		<updated>2014-04-03T13:59:40Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Sandarbh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Mohammed===&lt;br /&gt;
===Sijo===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18965</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18965"/>
		<updated>2014-04-03T13:59:33Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* =Ronak Chaudhari */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Mohammed===&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18964</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18964"/>
		<updated>2014-04-03T13:59:25Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Mojgan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Mohammed===&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari==&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18963</id>
		<title>DistOS 2014W Lecture 23</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_23&amp;diff=18963"/>
		<updated>2014-04-03T13:58:55Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: Created page with &amp;quot;===Presentations=== ===Mojgan=== ===Mohammed=== ===Sijo=== ===Sandarbh=== ===Ronak Chaudhari==&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Mojgan===&lt;br /&gt;
===Mohammed===&lt;br /&gt;
===Sijo===&lt;br /&gt;
===Sandarbh===&lt;br /&gt;
===Ronak Chaudhari==&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18961</id>
		<title>DistOS 2014W Lecture 22</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18961"/>
		<updated>2014-04-02T19:09:18Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Object Based Storage Systems - Keerthi Nelaturu */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Gehana===&lt;br /&gt;
&lt;br /&gt;
===Object Based Storage Systems - Keerthi Nelaturu===&lt;br /&gt;
&#039;&#039;&#039;Abstract&#039;&#039;&#039;&lt;br /&gt;
Object-based storage implementations have been around since the 1990s. This architecture is mostly used for developing scalable distributed file systems. It is surprising that Object-Oriented concepts can be used even in storage systems. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project Report Outline&#039;&#039;&#039;&lt;br /&gt;
This paper starts with a comparison on why we would prefer object-based storage systems to File or Block storage systems. A basic architecture will be presented discussing the common elements like security, load balancing, object attributes management methodology etc. that are part of this storage system. This will include an analysis on some of the other object storage systems like HYDRA, LUSTRE, SOSS, OBFS and Differstore.&lt;br /&gt;
Further into the paper, we would concentrate more on how object-based storage is useful for implementing massive image processing systems like Haystack and OBSI.&lt;br /&gt;
&lt;br /&gt;
===Simon &amp;amp; Peter===&lt;br /&gt;
===Adam===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18960</id>
		<title>DistOS 2014W Lecture 22</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18960"/>
		<updated>2014-04-02T19:08:49Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Bo */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Gehana===&lt;br /&gt;
&lt;br /&gt;
===Object Based Storage Systems - Keerthi Nelaturu===&lt;br /&gt;
&#039;&#039;&#039;Abstract&#039;&#039;&#039;&lt;br /&gt;
Object-based storage implementations have been around since the 1990s. This architecture is mostly used for developing scalable distributed file systems. It is surprising that Object-Oriented concepts can be used even in storage systems. &lt;br /&gt;
&lt;br /&gt;
Project Report Outline: &lt;br /&gt;
This paper starts with a comparison on why we would prefer object-based storage systems to File or Block storage systems. A basic architecture will be presented discussing the common elements like security, load balancing, object attributes management methodology etc. that are part of this storage system. This will include an analysis on some of the other object storage systems like HYDRA, LUSTRE, SOSS, OBFS and Differstore.&lt;br /&gt;
Further into the paper, we would concentrate more on how object-based storage is useful for implementing massive image processing systems like Haystack and OBSI.&lt;br /&gt;
&lt;br /&gt;
===Simon &amp;amp; Peter===&lt;br /&gt;
===Adam===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18959</id>
		<title>DistOS 2014W Lecture 22</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18959"/>
		<updated>2014-04-02T19:08:15Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Object Based Storage Systems - Keerthi Nelaturu */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Gehana===&lt;br /&gt;
&lt;br /&gt;
===Object Based Storage Systems - Keerthi Nelaturu===&lt;br /&gt;
&#039;&#039;&#039;Abstract&#039;&#039;&#039;&lt;br /&gt;
Object-based storage implementations have been around since the 1990s. This architecture is mostly used for developing scalable distributed file systems. It is surprising that Object-Oriented concepts can be used even in storage systems. &lt;br /&gt;
&lt;br /&gt;
Project Report Outline: &lt;br /&gt;
This paper starts with a comparison on why we would prefer object-based storage systems to File or Block storage systems. A basic architecture will be presented discussing the common elements like security, load balancing, object attributes management methodology etc. that are part of this storage system. This will include an analysis on some of the other object storage systems like HYDRA, LUSTRE, SOSS, OBFS and Differstore.&lt;br /&gt;
Further into the paper, we would concentrate more on how object-based storage is useful for implementing massive image processing systems like Haystack and OBSI.&lt;br /&gt;
&lt;br /&gt;
===Simon &amp;amp; Peter===&lt;br /&gt;
===Bo===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18958</id>
		<title>DistOS 2014W Lecture 22</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18958"/>
		<updated>2014-04-02T19:07:43Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Presentations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Gehana===&lt;br /&gt;
&lt;br /&gt;
===Object Based Storage Systems - Keerthi Nelaturu===&lt;br /&gt;
&#039;&#039;&#039;Abstract&#039;&#039;&#039;&lt;br /&gt;
Object-based storage implementations have been around since the 1990s. This architecture is mostly used for developing scalable distributed file systems. It is surprising that Object-Oriented concepts can be used even in storage systems. &lt;br /&gt;
&lt;br /&gt;
Project Report Outline: &lt;br /&gt;
This paper starts with a comparison on why we would prefer object-based storage systems to File or Block storage systems. A basic architecture will be presented discussing the common elements like security, load balancing, object attributes management methodology etc. that are part of this storage system. This will include an analysis on some of the other object storage systems like HYDRA, LUSTRE, SOSS, OBFS and Differstore.&lt;br /&gt;
Further into the paper, we would concentrate more on how object-based storage is useful for implementing massive image processing systems like Haystack and OBSI.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18957</id>
		<title>DistOS 2014W Lecture 22</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18957"/>
		<updated>2014-04-02T19:07:20Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Presentations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;br /&gt;
===Object Based Storage Systems - Keerthi Nelaturu===&lt;br /&gt;
&#039;&#039;&#039;Abstract&#039;&#039;&#039;&lt;br /&gt;
Object-based storage implementations have been around since the 1990s. This architecture is mostly used for developing scalable distributed file systems. It is surprising that Object-Oriented concepts can be used even in storage systems. &lt;br /&gt;
&lt;br /&gt;
Project Report Outline: &lt;br /&gt;
This paper starts with a comparison on why we would prefer object-based storage systems to File or Block storage systems. A basic architecture will be presented discussing the common elements like security, load balancing, object attributes management methodology etc. that are part of this storage system. This will include an analysis on some of the other object storage systems like HYDRA, LUSTRE, SOSS, OBFS and Differstore.&lt;br /&gt;
Further into the paper, we would concentrate more on how object-based storage is useful for implementing massive image processing systems like Haystack and OBSI.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18956</id>
		<title>DistOS 2014W Lecture 22</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_22&amp;diff=18956"/>
		<updated>2014-04-02T19:05:17Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: Created page with &amp;quot;===Presentations===&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Presentations===&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18945</id>
		<title>DistOS 2014W Lecture 21</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18945"/>
		<updated>2014-03-27T21:01:15Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Test */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
=== Marking ===&lt;br /&gt;
&lt;br /&gt;
* marked mostly on presentation, not content&lt;br /&gt;
* basically we want to communicate the basic structure of the paper, and do so in a way that isn&#039;t boring&lt;br /&gt;
&lt;br /&gt;
=== Content ===&lt;br /&gt;
&lt;br /&gt;
* concrete, not &amp;quot;head in the clouds&amp;quot;&lt;br /&gt;
* present the area&lt;br /&gt;
* compare and contrast the papers&lt;br /&gt;
* 10 minutes talk, 5 minutes feedback&lt;br /&gt;
* basic argument&lt;br /&gt;
* basic references&lt;br /&gt;
&lt;br /&gt;
=== Form ===&lt;br /&gt;
&lt;br /&gt;
* show the work we&#039;ve done on paper&lt;br /&gt;
* try to get feedback&lt;br /&gt;
* think of it as a rough draft&lt;br /&gt;
* try to get people to read the paper&lt;br /&gt;
* enthusiasm&lt;br /&gt;
* powerpoints are easier&lt;br /&gt;
* don&#039;t read slides&lt;br /&gt;
* no whole sentences on slides&lt;br /&gt;
* look at talks by Mark Shuttleworth&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
A clever observation that a simple solution could solve most distributed problems.  It&#039;s all about programming to an abstraction that is efficiently parallelizable.  Note that it&#039;s not actually a simple solution, because it sits atop a mountain of code.  It requires something like BigTable, which requires something like GFS, which requires something like Chubby.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Simplification&lt;br /&gt;
* Interestingly large scale problems can be implemented with this&lt;br /&gt;
* Easy to program, powerful for certain classes, and it scales like no one&#039;s business. &lt;br /&gt;
* Kind of an empowering model&lt;br /&gt;
Programming to an abstraction that is efficiently parallel. We have learned all about infrastructure until now. &lt;br /&gt;
Classic OS abstractions were about files.&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
Where MapReduce was suited for a specific family of solutions, Naiad tries to generalize the solution to apply parallelization to a much wider family.  Naiad supports MapReduce style solutions, but also many other solutions.  However, the tradeoff was simplicity.  It&#039;s like we took MapReduce and took away its low barrier to entry.  The idea is to create a constrained graph that can easily be parallelized.&lt;br /&gt;
* More complicated than MapReduce&lt;br /&gt;
* Talks about timely dataflow graphs &lt;br /&gt;
* It&#039;s all about graph algorithms - the graph abstraction&lt;br /&gt;
* Restrictions on graphs so that they can be mapped to parallel computation&lt;br /&gt;
* How to fit anything to this model is a big question. &lt;br /&gt;
* More general than map reduce &lt;br /&gt;
* Not very useful.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18944</id>
		<title>DistOS 2014W Lecture 21</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18944"/>
		<updated>2014-03-27T21:00:58Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Marking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
=== Marking ===&lt;br /&gt;
&lt;br /&gt;
* marked mostly on presentation, not content&lt;br /&gt;
* basically we want to communicate the basic structure of the paper, and do so in a way that isn&#039;t boring&lt;br /&gt;
&lt;br /&gt;
=== Test ===&lt;br /&gt;
&lt;br /&gt;
=== Content ===&lt;br /&gt;
&lt;br /&gt;
* concrete, not &amp;quot;head in the clouds&amp;quot;&lt;br /&gt;
* present the area&lt;br /&gt;
* compare and contrast the papers&lt;br /&gt;
* 10 minutes talk, 5 minutes feedback&lt;br /&gt;
* basic argument&lt;br /&gt;
* basic references&lt;br /&gt;
&lt;br /&gt;
=== Form ===&lt;br /&gt;
&lt;br /&gt;
* show the work we&#039;ve done on paper&lt;br /&gt;
* try to get feedback&lt;br /&gt;
* think of it as a rough draft&lt;br /&gt;
* try to get people to read the paper&lt;br /&gt;
* enthusiasm&lt;br /&gt;
* powerpoints are easier&lt;br /&gt;
* don&#039;t read slides&lt;br /&gt;
* no whole sentences on slides&lt;br /&gt;
* look at talks by Mark Shuttleworth&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
A clever observation that a simple solution could solve most distributed problems.  It&#039;s all about programming to an abstraction that is efficiently parallelizable.  Note that it&#039;s not actually a simple solution, because it sits atop a mountain of code: it requires something like BigTable, which requires something like GFS, which requires something like Chubby.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Simplification&lt;br /&gt;
* Interestingly, large-scale problems can be implemented with this&lt;br /&gt;
* Easy to program, powerful for certain classes of problems, and it scales like no one&#039;s business. &lt;br /&gt;
* A kind of empowering model&lt;br /&gt;
Programming to an abstraction that is efficiently parallel. We have learned all about infrastructure until now. &lt;br /&gt;
Classic OS abstractions were about files.&lt;br /&gt;
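The "programming to an abstraction" point can be made concrete with the canonical word-count example. This is a toy, single-machine sketch of the MapReduce model, not Google's implementation; the real framework distributes the map and reduce calls across thousands of machines, but the programmer's view is just these two functions:

```python
# A minimal, single-machine sketch of the MapReduce programming model
# (word count, the canonical example). The framework, not the programmer,
# handles distribution, shuffling, and fault tolerance.
from collections import defaultdict

def map_phase(documents):
    # map: document -> list of (key, value) pairs
    pairs = []
    for doc in documents:
        for word in doc.split():
            pairs.append((word, 1))
    return pairs

def shuffle(pairs):
    # group values by key, as the framework does between the two phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: (key, list of values) -> (key, aggregate)
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

Everything that makes this hard at scale (the shuffle, machine failures, stragglers) lives below the abstraction, which is exactly the "mountain of code" point above.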
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
Where MapReduce was suited for a specific family of solutions, Naiad tries to generalize the solution to apply parallelization to a much wider family.  Naiad supports MapReduce style solutions, but also many other solutions.  However, the tradeoff was simplicity.  It&#039;s like we took MapReduce and took away its low barrier to entry.  The idea is to create a constrained graph that can easily be parallelized.&lt;br /&gt;
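The constrained-graph idea can be sketched in miniature: vertices transform timestamped messages and forward them along fixed edges, and it is the fixed graph structure that makes parallel scheduling tractable. The vertex classes and scheduling below are invented for illustration and are not Naiad's actual API:

```python
# Toy sketch of a timely-dataflow-style graph: each vertex applies a
# function to timestamped messages and forwards results along fixed
# edges. Names and structure are illustrative, not Naiad's interface.
class Vertex:
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def send(self, timestamp, message):
        result = self.fn(message)
        for nxt in self.downstream:
            nxt.send(timestamp, result)

class Sink(Vertex):
    def __init__(self):
        super().__init__(lambda m: m)
        self.received = []

    def send(self, timestamp, message):
        self.received.append((timestamp, message))

# build a small fixed graph: source stream -> double -> sink
double = Vertex(lambda x: x * 2)
sink = Sink()
double.downstream.append(sink)

for t, x in enumerate([1, 2, 3]):
    double.send(t, x)
```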
* More complicated than MapReduce&lt;br /&gt;
* Talks about timely dataflow graphs&lt;br /&gt;
* It&#039;s all about graph algorithms - the graph abstraction&lt;br /&gt;
* Restrictions on graphs so that they can be mapped to parallel computation&lt;br /&gt;
* How to fit anything to this model is a big question. &lt;br /&gt;
* More general than MapReduce&lt;br /&gt;
* Not very useful.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18936</id>
		<title>DistOS 2014W Lecture 21</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18936"/>
		<updated>2014-03-27T15:21:51Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Naiad */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
=== Marking ===&lt;br /&gt;
&lt;br /&gt;
* marked mostly on presentation, not content&lt;br /&gt;
* basically we want to communicate the basic structure of the paper, and do so in a way that isn&#039;t boring&lt;br /&gt;
&lt;br /&gt;
=== Content ===&lt;br /&gt;
&lt;br /&gt;
* concrete, not &amp;quot;head in the clouds&amp;quot;&lt;br /&gt;
* present the area&lt;br /&gt;
* compare and contrast the papers&lt;br /&gt;
* 10 minutes talk, 5 minutes feedback&lt;br /&gt;
* basic argument&lt;br /&gt;
* basic references&lt;br /&gt;
&lt;br /&gt;
=== Form ===&lt;br /&gt;
&lt;br /&gt;
* show the work we&#039;ve done on paper&lt;br /&gt;
* try to get feedback&lt;br /&gt;
* think of it as a rough draft&lt;br /&gt;
* try to get people to read the paper&lt;br /&gt;
* enthusiasm&lt;br /&gt;
* powerpoints are easier&lt;br /&gt;
* don&#039;t read slides&lt;br /&gt;
* no whole sentences on slides&lt;br /&gt;
* look at talks by Mark Shuttleworth&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
A clever observation that a simple solution could solve most distributed problems.  It&#039;s all about programming to an abstraction that is efficiently parallelizable.  Note that it&#039;s not actually a simple solution, because it sits atop a mountain of code: it requires something like BigTable, which requires something like GFS, which requires something like Chubby.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Simplification&lt;br /&gt;
* Interestingly, large-scale problems can be implemented with this&lt;br /&gt;
* Easy to program, powerful for certain classes of problems, and it scales like no one&#039;s business. &lt;br /&gt;
* A kind of empowering model&lt;br /&gt;
Programming to an abstraction that is efficiently parallel. We have learned all about infrastructure until now. &lt;br /&gt;
Classic OS abstractions were about files.&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
Where MapReduce was suited for a specific family of solutions, Naiad tries to generalize the solution to apply parallelization to a much wider family.  Naiad supports MapReduce style solutions, but also many other solutions.  However, the tradeoff was simplicity.  It&#039;s like we took MapReduce and took away its low barrier to entry.  The idea is to create a constrained graph that can easily be parallelized.&lt;br /&gt;
* More complicated than MapReduce&lt;br /&gt;
* Talks about timely dataflow graphs&lt;br /&gt;
* It&#039;s all about graph algorithms - the graph abstraction&lt;br /&gt;
* Restrictions on graphs so that they can be mapped to parallel computation&lt;br /&gt;
* How to fit anything to this model is a big question. &lt;br /&gt;
* More general than MapReduce&lt;br /&gt;
* Not very useful.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18935</id>
		<title>DistOS 2014W Lecture 21</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_21&amp;diff=18935"/>
		<updated>2014-03-27T15:21:37Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* MapReduce */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
=== Marking ===&lt;br /&gt;
&lt;br /&gt;
* marked mostly on presentation, not content&lt;br /&gt;
* basically we want to communicate the basic structure of the paper, and do so in a way that isn&#039;t boring&lt;br /&gt;
&lt;br /&gt;
=== Content ===&lt;br /&gt;
&lt;br /&gt;
* concrete, not &amp;quot;head in the clouds&amp;quot;&lt;br /&gt;
* present the area&lt;br /&gt;
* compare and contrast the papers&lt;br /&gt;
* 10 minutes talk, 5 minutes feedback&lt;br /&gt;
* basic argument&lt;br /&gt;
* basic references&lt;br /&gt;
&lt;br /&gt;
=== Form ===&lt;br /&gt;
&lt;br /&gt;
* show the work we&#039;ve done on paper&lt;br /&gt;
* try to get feedback&lt;br /&gt;
* think of it as a rough draft&lt;br /&gt;
* try to get people to read the paper&lt;br /&gt;
* enthusiasm&lt;br /&gt;
* powerpoints are easier&lt;br /&gt;
* don&#039;t read slides&lt;br /&gt;
* no whole sentences on slides&lt;br /&gt;
* look at talks by Mark Shuttleworth&lt;br /&gt;
&lt;br /&gt;
== MapReduce ==&lt;br /&gt;
&lt;br /&gt;
A clever observation that a simple solution could solve most distributed problems.  It&#039;s all about programming to an abstraction that is efficiently parallelizable.  Note that it&#039;s not actually a simple solution, because it sits atop a mountain of code: it requires something like BigTable, which requires something like GFS, which requires something like Chubby.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Simplification&lt;br /&gt;
* Interestingly, large-scale problems can be implemented with this&lt;br /&gt;
* Easy to program, powerful for certain classes of problems, and it scales like no one&#039;s business. &lt;br /&gt;
* A kind of empowering model&lt;br /&gt;
Programming to an abstraction that is efficiently parallel. We have learned all about infrastructure until now. &lt;br /&gt;
Classic OS abstractions were about files.&lt;br /&gt;
&lt;br /&gt;
== Naiad ==&lt;br /&gt;
&lt;br /&gt;
Where MapReduce was suited for a specific family of solutions, Naiad tries to generalize the solution to apply parallelization to a much wider family.  Naiad supports MapReduce style solutions, but also many other solutions.  However, the tradeoff was simplicity.  It&#039;s like we took MapReduce and took away its low barrier to entry.  The idea is to create a constrained graph that can easily be parallelized.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=18916</id>
		<title>DistOS 2014W Lecture 20</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_20&amp;diff=18916"/>
		<updated>2014-03-25T15:28:06Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: /* Cassandra */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Cassandra ==&lt;br /&gt;
&lt;br /&gt;
Cassandra is essentially running a BigTable interface on top of a Dynamo infrastructure.  BigTable uses GFS&#039; built-in replication and Chubby for locking.  Cassandra uses gossip algorithms: [http://dl.acm.org/citation.cfm?id=1529983 Scuttlebutt].  Apache Zookeeper is used for distributed configuration.&lt;br /&gt;
&lt;br /&gt;
Cassandra notes:&lt;br /&gt;
* BigTable depends on a GFS-type cluster; Cassandra is lighter weight&lt;br /&gt;
* Almost all of the readings are part of Apache&lt;br /&gt;
* Designed more for online updates: interactive, lower-latency workloads&lt;br /&gt;
* Once data is written to disk it is only read back&lt;br /&gt;
* A scalable multi-master database with no single point of failure&lt;br /&gt;
* There is a reason for not giving out complete detail on the table schema - it is probably used for more than just inbox search&lt;br /&gt;
* All of a user&#039;s data is in one row of a table&lt;br /&gt;
* It is not a plain key-value store; a row is a big blob of data&lt;br /&gt;
* Gossip-based protocol - Scuttlebutt&lt;br /&gt;
* Keys are partitioned on a fixed circular ring&lt;br /&gt;
* The consistency issue is not addressed at all; writes are done in an immutable way and never changed&lt;br /&gt;
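The "fixed circular ring" can be sketched with consistent hashing: each node owns a position on a hash ring, and a key lands on the first node at or after the key's hash position, wrapping around. This is a simplified illustration of the Dynamo-style partitioning scheme, not Cassandra's actual partitioner:

```python
# Simplified sketch of ring-based key partitioning (Dynamo/Cassandra
# style). Each node hashes to a position on a circular ring; a key is
# owned by the first node clockwise from the key's own hash position.
# Illustrative only, not Cassandra's actual partitioner.
import hashlib

RING_SIZE = 2 ** 32

def ring_position(name):
    digest = hashlib.md5(name.encode()).hexdigest()
    return int(digest, 16) % RING_SIZE

def owner(key, nodes):
    key_pos = ring_position(key)
    positions = sorted((ring_position(n), n) for n in nodes)
    for pos, node in positions:
        if pos >= key_pos:
            return node
    return positions[0][1]  # wrap around the ring

nodes = ["node-a", "node-b", "node-c"]
assignment = {k: owner(k, nodes) for k in ["alice", "bob", "carol"]}
```

The appeal is that adding or removing a node only moves the keys in that node's arc of the ring, which the gossip protocol can then propagate.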
&lt;br /&gt;
Discussion topics:&lt;br /&gt;
* Affero GPL&lt;br /&gt;
* Cassandra vs. BigTable&lt;br /&gt;
* BigTable is not part of Hadoop; BigTable is a &amp;quot;funny thing on top of GFS&amp;quot;&lt;br /&gt;
* History of HDFS; HBase&lt;br /&gt;
* Why are two projects with the same notion supported? Apache as a community: for any tool in CS, particularly software tools, it is actually important to have more than one good implementation. The only time that doesn&#039;t happen is because of market realities. &lt;br /&gt;
* Older-style network protocols - token rings&lt;br /&gt;
* What sort of computational systems avoid changing data? Functional programming languages. &lt;br /&gt;
* What is different from classic C and C++? Functional languages try to eliminate side effects: there is no assignment; data is just bound - you associate a name with a value. Garbage collection; no mutation of data. &lt;br /&gt;
* Systems are talking about implementing functional-like semantics.&lt;br /&gt;
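The no-assignment, no-mutation style can be shown in miniature, with Python standing in for a functional language. Instead of mutating a record in place, you build a new value and leave the original untouched, which is the style Cassandra's immutable write path echoes:

```python
# Immutable update in miniature: build a new value rather than mutating
# the old one. MappingProxyType makes the records read-only, so any
# attempted in-place assignment raises TypeError.
from types import MappingProxyType

def with_field(record, key, value):
    # return a new read-only mapping; the original is untouched
    updated = dict(record)
    updated[key] = value
    return MappingProxyType(updated)

v1 = MappingProxyType({"user": "anil", "inbox": 3})
v2 = with_field(v1, "inbox", 4)
```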
&lt;br /&gt;
== Comet ==&lt;br /&gt;
&lt;br /&gt;
The major idea behind Comet is triggers/callbacks.  There is an extensive literature in extensible operating systems, basically adding code to the operating system to better suit my application.  &amp;quot;Generally, extensible systems suck.&amp;quot; -[[User:Soma]]&lt;br /&gt;
&lt;br /&gt;
[https://www.usenix.org/conference/osdi10/comet-active-distributed-key-value-store The presentation video of Comet]&lt;br /&gt;
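The trigger/callback idea behind an "active" store can be sketched as a key-value store that runs application-supplied handlers on store events. This is a toy illustration with invented names; Comet itself attaches sandboxed Lua handlers to stored objects rather than exposing anything like this interface:

```python
# Toy sketch of an "active" key-value store in the spirit of Comet:
# applications register small handlers that the store runs on events
# such as put. All names here are invented for illustration.
class ActiveStore:
    def __init__(self):
        self.data = {}
        self.handlers = {}  # event name -> list of callbacks

    def on(self, event, callback):
        self.handlers.setdefault(event, []).append(callback)

    def fire(self, event, key, value):
        for cb in self.handlers.get(event, []):
            cb(key, value)

    def put(self, key, value):
        self.data[key] = value
        self.fire("put", key, value)

    def get(self, key):
        value = self.data.get(key)
        self.fire("get", key, value)
        return value

store = ActiveStore()
log = []
store.on("put", lambda k, v: log.append((k, v)))
store.put("x", 42)
```

The "extensible systems suck" complaint applies directly: once user code runs inside the store, the store must sandbox it, bound its resources, and debug its failures.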
&lt;br /&gt;
Additional notes:&lt;br /&gt;
* Google developed its technology internally and used it for competitive advantage.&lt;br /&gt;
* Facebook developed its technology in an open-source manner.&lt;br /&gt;
* Under GPLv3 you have to provide the source code with the binary.&lt;br /&gt;
* Under the AGPL, source code must also be provided when the software is offered as a network service.&lt;br /&gt;
* BigTable needs GFS; Cassandra depends on each server&#039;s local file system. Anil feels a Cassandra cluster is easy to set up.&lt;br /&gt;
* BigTable is designed for batch updates; Cassandra is for handling real-time workloads.&lt;br /&gt;
* Schema design is explained with the inbox example, but it does not give clarity about how the table will look. Anil thinks they store a lot of data with the messages, which makes the table crappy.&lt;br /&gt;
* Cassandra is designed for high-speed access and online operation.&lt;br /&gt;
* Zookeeper is similar to Chubby. Zookeeper is for node-level information and for configuring new nodes; gossip is more about key partitioning.&lt;br /&gt;
* Cassandra writes in an immutable way, like functional programming; there is no assignment in functional programming.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_15&amp;diff=18738</id>
		<title>DistOS 2014W Lecture 15</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_15&amp;diff=18738"/>
		<updated>2014-03-06T20:57:51Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: Designing Exercise&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Designing Exercise&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Can we do any kind of distributed system without crypto? We can&#039;t trust crypto...&lt;br /&gt;
&lt;br /&gt;
What are the main features we need to consider for such a system? &lt;br /&gt;
*Limited Sharing&lt;br /&gt;
*Integrity&lt;br /&gt;
*Availability&lt;br /&gt;
&lt;br /&gt;
Perhaps probabilistically...&lt;br /&gt;
&lt;br /&gt;
Want to be able to put data in, have it distributed, and be able to get it out on some other machine. This kind of sharing would need identification or authentication process.&lt;br /&gt;
&lt;br /&gt;
Availability: &amp;quot;distribute the crap out of it&amp;quot;, doesn&#039;t need crypto. No corruption of data. &lt;br /&gt;
&lt;br /&gt;
Integrity: hashing, but we assume hashes can be forged. If we want to know that we got the same file, then simply send each other the file and compare.&lt;br /&gt;
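The hashing idea in miniature, using only the standard library. As the note above says, this only helps under the assumption that the hash cannot be forged; comparing the full files avoids that assumption at the cost of sending all the data:

```python
# Integrity check by hashing: two parties compare short digests instead
# of exchanging whole files. This assumes the hash function cannot be
# forged, which is exactly the assumption the discussion questions.
import hashlib

def digest(data):
    return hashlib.sha256(data).hexdigest()

original = b"important replicated document"
replica = b"important replicated document"
tampered = b"imp0rtant replicated document"

same = digest(original) == digest(replica)
detected = digest(original) != digest(tampered)
```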
&lt;br /&gt;
&#039;&#039;&#039;Note on Project Proposal&#039;&#039;&#039; &lt;br /&gt;
* The date has been extended until next week. As the professor said, some of the proposals are not completely up to the mark.&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=18478</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=18478"/>
		<updated>2014-01-23T18:12:43Z</updated>

		<summary type="html">&lt;p&gt;Nelaturuk: Discussion on &amp;quot;The Early Web&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
: 1. How do you think the web would have turned out if it had not developed into its present form? &lt;br /&gt;
: 2. What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group 1&#039;&#039;&#039;&lt;br /&gt;
: Relatively satisfied with the present structure of the web; some suggested changes are in the areas below: &lt;br /&gt;
* Make use of the greater potential of Protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Implementation changes in the present payment systems, for example the use of &amp;quot;Micro-computation&amp;quot; - a discussion we will return to in future classes. Also, cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group 2&#039;&#039;&#039;&lt;br /&gt;
* Information to be classified in detail &lt;br /&gt;
** Organize things on web. Ex: Yahoo indexers&lt;br /&gt;
** Suggestion that the Universal Decimal System, an idea by Paul Otlet, be considered. &lt;br /&gt;
** In the end it comes down to the semantic web&lt;br /&gt;
* Information redundancy&lt;br /&gt;
* Information verification&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group 3&#039;&#039;&#039;&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for a document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-to-peer distributed mechanisms for documents&lt;br /&gt;
** Reverse links with caching - distributed cache&lt;br /&gt;
** More availability for the user - what happens when the system fails? &lt;br /&gt;
** Key management to be considered - is it better to have a centralized or a distributed mechanism? &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Group 4&#039;&#039;&#039;&lt;br /&gt;
* An idea of web searching for us &lt;br /&gt;
* A suggestion of a different web if it would have been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - A notion already being implemented by Google slowly.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* An HTML equivalent inspired by AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem : &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;/div&gt;</summary>
		<author><name>Nelaturuk</name></author>
	</entry>
</feed>