DistOS 2023W 2023-02-06
Discussion questions
- Discuss what you think was interesting about Sprite relative to past systems. What was new? What was old?
- How does AFS compare to NFS, in terms of their design, implementation, and ambition?
- What is the role of UNIX in the design and implementation of Sprite and AFS?
- What else came to mind when reading and discussing these papers?
- What affects scale in AFS? Sprite?
- What sort of workloads are these systems designed for?
Notes
Sprite and AFS
--------------
- Sprite is similar to LOCUS in high-level design
- but Sprite has optimized for performance in various ways
- when migrating processes, only copy pages that you need
(if the executable is already in memory on the other workstation, those pages can be reused directly)
- caching on both client and server
- caching of files in memory (we have more RAM)
- efficient distribution of filesystem namespace
- still not running at large scale (roughly 100 workstations, 6 servers)
- note the tradeoff between RAM and disks
- do as much with RAM as possible, just use disks for persistence
(very much what we do today)
- Sprite seems very familiar because its caching architecture is very much how we do things today
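The caching idea above can be sketched in a few lines. This is a toy illustration, not Sprite's actual interfaces: file blocks live in RAM, writes only touch the cache, and the backing store (standing in for the disk or server) is updated on an explicit flush, in the spirit of Sprite's delayed writes.

```python
class BlockCache:
    """Toy Sprite-style client cache: serve from RAM, write back lazily.

    All names here are illustrative assumptions, not Sprite's real API.
    """

    def __init__(self, backing):
        self.backing = backing   # dict block_id -> bytes ("the server/disk")
        self.cache = {}          # in-RAM copies of blocks
        self.dirty = set()       # blocks modified but not yet written back

    def read(self, block_id):
        if block_id not in self.cache:            # miss: fetch from backing store
            self.cache[block_id] = self.backing[block_id]
        return self.cache[block_id]               # hit: served from RAM

    def write(self, block_id, data):
        self.cache[block_id] = data               # write goes to RAM only
        self.dirty.add(block_id)                  # remember to flush later

    def flush(self):
        for block_id in self.dirty:               # delayed write-back
            self.backing[block_id] = self.cache[block_id]
        self.dirty.clear()
```

Sprite flushed dirty blocks on a timer (on the order of tens of seconds); here the flush is explicit to keep the sketch short. The key property is the same: the backing store sees stale data until write-back happens.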
AFS
- Andrew is for Andrew Carnegie and Andrew Mellon; this came out of CMU (Carnegie Mellon University)
- AFS was trying for web scale before the web
- global filesystem
- every organization was a "cell" and AFS allowed for inter-cell communication
- but authentication normally didn't work outside a cell, so you mostly wouldn't see anything except public files
- took security seriously, integrated with Kerberos for authentication
- takes a very different approach to accessing files
- workstation assumed to have a local disk
- so on open, a file would be copied to the local disk
- all work on the file would happen locally
- on close, file would be copied back to the server
- quirk of the AFS model - close could fail!
- do you check close for failure normally?
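The whole-file model above, including the close-can-fail quirk, can be sketched as follows. This is a toy illustration of the semantics, not the real Vice/Venus protocol; the `quota` parameter is an invented stand-in for whatever makes the server reject the store at close time.

```python
class AFSClient:
    """Toy AFS-style client: fetch whole file on open, store back on close.

    Names and structure are illustrative assumptions, not real AFS code.
    """

    def __init__(self, server, quota=float("inf")):
        self.server = server     # dict path -> bytes, standing in for Vice
        self.quota = quota       # invented knob to make close() fail
        self.local = {}          # local-disk copies, standing in for Venus

    def open(self, path):
        self.local[path] = self.server[path]   # copy the whole file on open
        return path

    def write(self, path, data):
        self.local[path] = data                # all work happens locally

    def close(self, path):
        # The store back to the server happens here, so this is where
        # quota or network errors surface -- callers must check close().
        data = self.local.pop(path)
        if len(data) > self.quota:
            raise OSError("close failed: server rejected the store")
        self.server[path] = data
```

A program that ignores the result of `close()` here can silently lose an update: the local work succeeded, but the file never made it back to the server. That is exactly why AFS made people rethink the habit of not checking `close()`.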
In AFS, the servers are very different from the clients
- complex split between Vice (the server-side file service) and Venus (the client-side cache manager)
- designed for large installations
AFS workstations couldn't work disconnected, but later systems tried to fix this (notably Coda, which never became widely used)