DistOS 2014W Lecture 8
==NFS and AFS (Jan 30)==
* Russel Sandberg et al., "Design and Implementation of the Sun Network Filesystem" (1985)
* [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-11/howard-afs.pdf John H. Howard et al., "Scale and Performance in a Distributed File System" (1988)]
==NFS==
Group 1:
1) Per-operation traffic: every read or write becomes its own request to the server.
2) RPC-based: easy to program with, but a very [http://www.joelonsoftware.com/articles/LeakyAbstractions.html leaky abstraction].
3) Unreliable.
Group 2:
1) Designed to share disks over a network, not files.
2) More UNIX-like: it tried to maintain UNIX file semantics on both the client and the server side.
3) Portable: it was meant to work (as a server) across many file system types.
4) Used UDP: if a request was dropped, the client just sent it again (see the retry sketch at the end of this section).
5) It did not try to minimize network traffic.
6) Used the vnode/VFS layer as a transparent interface to local disks (a sketch follows this list).
7) Did not require much dedicated hardware.
8) Later versions took on features of AFS.
9) The stateless protocol conflicts with files being stateful by nature.
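As a rough illustration of point 6, here is a minimal Python sketch of the vnode/VFS idea: the kernel dispatches file operations through one abstract interface, and each file system type, local or remote, plugs in behind it. The class and method names here are ours for illustration, not Sun's actual kernel API.

<syntaxhighlight lang="python">
# Sketch of the vnode/VFS idea: the kernel calls file operations through one
# abstract interface, and each file system type (local or NFS) supplies its
# own implementation behind it. Names are illustrative, not Sun's kernel API.
from abc import ABC, abstractmethod

class VnodeOps(ABC):
    """One vnode per file; callers never see which FS type is behind it."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> int: ...

class LocalVnode(VnodeOps):
    """Backed by an ordinary local file."""
    def __init__(self, path: str):
        self.f = open(path, "r+b")
    def read(self, offset: int, length: int) -> bytes:
        self.f.seek(offset)
        return self.f.read(length)
    def write(self, offset: int, data: bytes) -> int:
        self.f.seek(offset)
        return self.f.write(data)

class NfsVnode(VnodeOps):
    """Backed by a remote server: each operation becomes an RPC."""
    def __init__(self, server, fhandle: bytes):
        self.server, self.fhandle = server, fhandle
    def read(self, offset: int, length: int) -> bytes:
        return self.server.call("READ", self.fhandle, offset, length)
    def write(self, offset: int, data: bytes) -> int:
        return self.server.call("WRITE", self.fhandle, offset, data)

def kernel_read(vnode: VnodeOps, offset: int, length: int) -> bytes:
    # Identical for local and remote files: the "transparent interface".
    return vnode.read(offset, length)
</syntaxhighlight>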
Group 3:
1) The cache-consistency assumption is invalid: clients can read stale data.
2) No dedicated locking mechanism. The designers could not decide on a locking strategy, so they left it to NFS users to run their own separate locking service.
3) Bad security: the server trusts the user ID the client claims.
Other:
* Each client mounts the full remote file system wherever it chooses, so there is no common namespace.
* Hostname lookup and address binding happen once, at mount time.
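To make points 1, 4, and 9 of Group 2 concrete, here is a toy Python sketch of NFS-style stateless operation over UDP. Every READ request carries the file handle and offset, so the server keeps no per-client state, and retrying after a drop is harmless because READ is idempotent. The message format (pickled tuples) is invented for the sketch; the real protocol uses XDR-encoded Sun RPC.

<syntaxhighlight lang="python">
# Toy sketch of NFS-style statelessness over UDP: every request names the
# file handle and offset explicitly, so the server keeps nothing between
# requests, and a retransmitted READ is harmless. Not the real wire format.
import pickle
import socket

def nfs_read(server: tuple, fhandle: bytes, offset: int, length: int,
             retries: int = 5) -> bytes:
    """Send a self-contained READ request, retransmitting on timeout."""
    request = pickle.dumps(("READ", fhandle, offset, length))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    try:
        for _ in range(retries):
            sock.sendto(request, server)      # full state in every request
            try:
                reply, _ = sock.recvfrom(65535)
                return pickle.loads(reply)
            except socket.timeout:
                continue                      # dropped? just ask again
        raise TimeoutError("server not responding")
    finally:
        sock.close()
</syntaxhighlight>

Note that this retry strategy is only safe for idempotent operations, a point the class discussion returns to below.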
==AFS==
Group 1:
1) Designed for 5,000 to 10,000 clients.
2) High integrity.
Group 2:
1) Designed to share files over a network, not disks. It presents one file system.
2) Better scalability.
3) Better security (Kerberos).
4) Minimizes network traffic.
5) Less UNIX-like.
6) Pluggable authentication.
7) Needs more kernel storage because its operations are more complex.
8) The inode concept is replaced with a fid (file identifier).
Group 3:
1) The cache assumption is valid: the server issues callbacks, promising to notify a client if its cached copy of a file becomes stale.
2) Has a dedicated locking mechanism (unlike NFS).
3) Good security.
Other:
* Caches full files locally on open; modified files are written back to the server on close (see the sketch below).
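Here is a minimal sketch of that whole-file caching scheme, with hypothetical names rather than real AFS client code: open() fetches the entire file once and the server promises a "callback", a notification if another client changes it; reads and writes then touch only the local copy, and close() stores the modified file back.

<syntaxhighlight lang="python">
# Sketch of AFS-style whole-file caching (hypothetical names, not the real
# AFS client). open() fetches the whole file once; the server promises a
# "callback", i.e. it will notify us if another client changes the file.
class AfsClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}          # fid -> local copy of file contents
        self.callback_ok = {}    # fid -> is the server's promise still valid?

    def open(self, fid: str) -> bytearray:
        # Talk to the server only if we lack a valid cached copy.
        if not self.callback_ok.get(fid):
            self.cache[fid] = bytearray(self.server.fetch(fid))
            self.callback_ok[fid] = True    # server registers a callback
        return self.cache[fid]

    def close(self, fid: str, dirty: bool) -> None:
        # Like a database commit: this is when others can see our changes.
        if dirty:
            self.server.store(fid, bytes(self.cache[fid]))

    def callback_broken(self, fid: str) -> None:
        # Server notification: someone changed the file; next open refetches.
        self.callback_ok[fid] = False
</syntaxhighlight>

This is also why open and close, rather than read and write, are the expensive operations in AFS.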
==Class Discussion:==
NFS and AFS took substantially different approaches to the many problems they faced. While we consider AFS to have made generally better choices, it was not widely adopted because it was complex and difficult to set up, administer, and maintain. NFS, by contrast, was simple. Its protocol and API were relatively stateless (hence its use of UDP), and it shared information at the file level rather than the block level. It was also built on RPC, which was convenient to program in but, as we have already discussed, a bad abstraction, since it hid the inherent flakiness of the network; the sketch below makes the leak concrete. This use of RPC led to security and reliability problems in NFS.
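To see the leak, contrast the idempotent READ sketch in the NFS section with a non-idempotent operation such as removing a file. The stub below is hypothetical, not Sun RPC: it looks like an ordinary function call, but when the reply is lost the caller cannot tell whether the remove ran, and a naive retry can fail even though the original call succeeded.

<syntaxhighlight lang="python">
# A non-idempotent operation shows why RPC leaks. Unlike READ, a lost reply
# here leaves three possible outcomes: success, failure, or unknown (the
# remove may or may not have run). Local calls never have the third outcome.
import socket

class RpcError(Exception):
    """The call may or may not have executed on the server."""

def remove_file(server: tuple, path: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    try:
        sock.sendto(b"REMOVE " + path.encode(), server)
        sock.recvfrom(65535)    # looks like an ordinary return value...
    except socket.timeout:
        # ...but if the reply was lost the remove may already have happened,
        # and a naive retry would then fail even though the call succeeded.
        raise RpcError("lost reply: did the remove happen?")
    finally:
        sock.close()
</syntaxhighlight>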
AFS took a more thorough approach to working out coherent consistency guarantees and implementing them efficiently. Its designers treated the network as the bottleneck and made heavy use of caching to reduce chatter over it. The 'open' and 'close' operations in AFS were critical, with an importance comparable to that of 'commit' operations in a well-designed database system. AFS's security model was also interesting: rather than relying on the UNIX access-list approach, it used a single sign-on system based on Kerberos.