NFS, AFS, Sprite FS


Readings

Questions

  1. What were the key design goals of NFS, AFS, and Sprite's FS?
  2. How well did they achieve their goals?
  3. What are their limitations?
  4. How suitable are these filesystems in modern small networks? Enterprise networks? Internet-scale applications? Why?

Notes from this and the previous week

Two main things to consider about new ideas for systems:

  • mental models
  • controls

Does the new idea solve some problem better than existing systems, and is the improvement big enough to outweigh the disadvantages of switching?

Bell Labs

  • tried to turn everything into a file
  • wanted to make the OS easier to use for the programmers of their systems
  • "simple" common API/protocol for I/O & IPC resources; "radical network transparency"
  • focus on process communication
  • portability
  • code reuse (of their own code), but ignored the legacy-code problem
  • "laziness" & "generality"
  • move from centralized to distributed computing to make use of the resources of individual machines
  • resource utilization / efficiency
  • with Plan 9, you don't know when you're using a network (network transparency), because everything is just a file (see the sketch after this list)
    • is the overhead consistent? Not always: one can't predict how "file" accesses will perform in the field
    • reliability of "file" access depends on the reliability of the network and the remote machines

DSM

  • tried to turn everything into RAM
  • wanted to make programming easier by removing the need to explicitly send data between machines
  • resource utilization / efficiency (insofar as efficiency and the DSM abstraction don't conflict)
  • focus on a class of problems (not as general)
  • an abstraction of system resources ("is it the right abstraction?")
  • with DSM, you don't know when you're using a network (network transparency), because everything is just memory (see the sketch after this list)
    • is the overhead consistent? No: one can't predict how programs will perform in the field
    • reliability of programs depends on the reliability of the network and the remote machines
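
A minimal sketch of the programming model DSM aims for, using ordinary shared memory between two processes on one machine (an illustrative stand-in; a real DSM system extends the same illusion across machines by shipping pages over the network):

  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      /* Under DSM this mapping could span machines; here it is one host. */
      int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      if (shared == MAP_FAILED)
          return 1;
      *shared = 0;

      if (fork() == 0) {       /* the "remote" worker just writes memory */
          *shared = 42;        /* no send(), no marshalling, no protocol */
          _exit(0);
      }
      wait(NULL);
      printf("saw %d with no explicit message passing\n", *shared);
      return 0;
  }

What the sketch hides is exactly the problem noted above: on a real network each such write may cost a page transfer, so the overhead is invisible but not consistent.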

With hardware-based DSM, the processor has no way of knowing which pages do and don't need to be secure. Hardware-based DSM is also not a portable system design, since it is tied to particular hardware.
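
Software DSM implementations typically sidestep this by using standard virtual-memory protection rather than special hardware. A single-machine sketch of that page-fault technique follows; the "remote fetch" is simulated and error handling is kept minimal:

  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static char  *page;
  static size_t page_size;

  /* On a fault, a real DSM would find the node owning info->si_addr and
   * pull the page's contents over the network before unprotecting it. */
  static void on_fault(int sig, siginfo_t *info, void *ctx)
  {
      (void)sig; (void)info; (void)ctx;
      mprotect(page, page_size, PROT_READ | PROT_WRITE);
  }

  int main(void)
  {
      page_size = (size_t)sysconf(_SC_PAGESIZE);
      page = mmap(NULL, page_size, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (page == MAP_FAILED)
          return 1;

      struct sigaction sa;
      memset(&sa, 0, sizeof sa);
      sa.sa_sigaction = on_fault;
      sa.sa_flags = SA_SIGINFO;
      sigaction(SIGSEGV, &sa, NULL);

      page[0] = 'x';  /* faults; the handler "fetches" and unprotects the page */
      printf("access succeeded after simulated fetch: %c\n", page[0]);
      return 0;
  }

Because it relies only on mmap/mprotect and signals, this approach is portable across processors, at the price of a fault (and, in a real system, a network round trip) on the first access to each page.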