<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=DistOS_2023W_2023-02-01</id>
	<title>DistOS 2023W 2023-02-01 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=DistOS_2023W_2023-02-01"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2023W_2023-02-01&amp;action=history"/>
	<updated>2026-04-08T03:25:31Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2023W_2023-02-01&amp;diff=24331&amp;oldid=prev</id>
		<title>Soma: Created page with &quot;==Notes==  &lt;pre&gt; LOCUS &amp; NFS notes -----------------  NFS: file servers &amp; file clients (few servers, many clients)  - files live on the servers  LOCUS: every computer has files, not all files are on all computers  - so each computer could be a server or a client - or just have local access  could have replicas but logically should act like one file  - so replicas have to be synchronized  - did centralized sync via a designated host - all updates would go    there, and th...&quot;</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2023W_2023-02-01&amp;diff=24331&amp;oldid=prev"/>
		<updated>2023-02-01T19:11:04Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;==Notes==  &amp;lt;pre&amp;gt; LOCUS &amp;amp; NFS notes -----------------  NFS: file servers &amp;amp; file clients (few servers, many clients)  - files live on the servers  LOCUS: every computer has files, not all files are on all computers  - so each computer could be a server or a client - or just have local access  could have replicas but logically should act like one file  - so replicas have to be synchronized  - did centralized sync via a designated host - all updates would go    there, and th...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LOCUS &amp;amp; NFS notes&lt;br /&gt;
-----------------&lt;br /&gt;
&lt;br /&gt;
NFS: file servers &amp;amp; file clients (few servers, many clients)&lt;br /&gt;
 - files live on the servers&lt;br /&gt;
&lt;br /&gt;
LOCUS: every computer has files, not all files are on all computers&lt;br /&gt;
 - so each computer could be a server or a client - or just have local access&lt;br /&gt;
&lt;br /&gt;
could have replicas but logically should act like one file&lt;br /&gt;
 - so replicas have to be synchronized&lt;br /&gt;
 - did centralized sync via a designated host - all updates would go&lt;br /&gt;
   there, and then it would distribute the updates to the replicas&lt;br /&gt;
&lt;br /&gt;
Is LOCUS very fault tolerant?&lt;br /&gt;
&lt;br /&gt;
files could only be as big as a single disk&lt;br /&gt;
 - really, the idea was small files, not large ones&lt;br /&gt;
&lt;br /&gt;
Partitioning was a concern; they had recovery methods, but even individual hosts going down would hurt performance&lt;br /&gt;
 - would try to recover, but fundamentally fragile&lt;br /&gt;
&lt;br /&gt;
LOCUS tried for transparency, but faults would break that model fast&lt;br /&gt;
NFS did too, and mostly worked, except under faults or if you unlinked open files&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Soma</name></author>
	</entry>
</feed>