DistOS 2015W Session 4
Andrew File System
AFS (the Andrew File System) was developed in direct response to NFS: universities ran into problems when they tried to scale NFS to share files effectively among their staff. AFS was more scalable than NFS because read and write operations happened locally before being committed to the server (the data store).
Since AFS copies files locally when they are opened and only sends the data back when they are closed, all operations between opening and closing a file are very fast and do not touch the network. NFS, by contrast, operates on files remotely, so there is no data to transfer when opening or closing a file, making those operations nearly instant.
There are several problems with this design, however:
- The local system must have enough space to temporarily store the file.
- Opening and closing the files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).
- If the close operation fails, the system will not have the updated version of the file. Many programs are designed around local filesystems, and therefore don't even check the return value of the close operation (as this is unlikely to fail on a local FS), giving users the false impression that everything went well.
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously annoying to set up, as it is geared towards university-sized networks, further limiting its success.
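To make the open/close semantics concrete, here is a minimal Python sketch of whole-file caching. The RemoteStore and CachedFile classes and their method names are hypothetical stand-ins, not the real AFS client or protocol; the point is that all network cost is paid at open and close, while reads and writes in between touch only the local copy.

class RemoteStore:
    """Stand-in for the AFS file server: holds whole files by name."""
    def __init__(self):
        self.files = {}

    def fetch(self, name):
        # Network transfer of the entire file (hypothetical protocol).
        return self.files.get(name, b"")

    def store(self, name, data):
        # Network transfer of the entire file back to the server.
        self.files[name] = data
        return True  # a failure here is what a failed close() means

class CachedFile:
    """AFS-style whole-file caching: fetch on open, write back on close."""
    def __init__(self, store, name):
        self.store, self.name = store, name
        self.data = bytearray(store.fetch(name))  # open: pull whole file

    def read(self, off, n):
        return bytes(self.data[off:off + n])      # local, no network

    def write(self, off, buf):
        self.data[off:off + len(buf)] = buf       # local, no network

    def close(self):
        if not self.store.store(self.name, bytes(self.data)):
            raise IOError("close failed: server did not take the file")

server = RemoteStore()
f = CachedFile(server, "notes.txt")  # whole file crosses the network
f.write(0, b"hello")                 # fast, purely local
print(f.read(0, 5))                  # fast, purely local
f.close()                            # whole file crosses the network again

Note that reading a single byte still forces the whole file across the network at open time, which is exactly the bandwidth problem listed above, and a failed write-back at close is exactly the silent-failure problem.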
Amoeba Operating System
Capabilities:
- A capability acts as a pointer (a reference) to an object.
- It is a kind of ticket or key that assigns its holder the right to perform some (not necessarily all) operations on that object.
- Capabilities work across wide-area networks: they name the server's port, not a particular machine.
- Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each.
- To invoke an operation, the client sends a request message to the server and blocks; after the server has performed the operation, it sends back a reply message that unblocks the client.
- Sending the request, blocking, and accepting the reply together form a remote procedure call, which can be wrapped in stubs so that the entire remote operation looks like a local procedure call.
- The second field of a capability is the object number, used by the server to identify which of its objects is being addressed; the server port (the first field) and the object number together determine the object on which the operation is to be performed.
- Server ports are generated as 48-bit random numbers, so valid ports cannot realistically be guessed.
- The third field is the rights field, which contains a bit map telling which operations the holder of the capability may perform (see the sketch after this list).
- X11 window management is also supported.
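As an illustration of the capability format described above, here is a small Python sketch using the field sizes from the Amoeba papers: a 48-bit server port, a 24-bit object number, an 8-bit rights bit map, and a 48-bit check field that servers use to make forging rights infeasible (the check field is not mentioned in the notes above, but it is part of the published layout). The helper names are ours, not Amoeba's actual code.

import os

RIGHT_READ, RIGHT_WRITE, RIGHT_DELETE = 0x01, 0x02, 0x04

def new_port():
    # Servers pick their port as a 48-bit random number.
    return int.from_bytes(os.urandom(6), "big")

def make_capability(port, obj_num, rights, check):
    # Pack 128 bits: 6-byte port, 3-byte object number,
    # 1-byte rights bit map, 6-byte check field.
    return (port.to_bytes(6, "big")
            + obj_num.to_bytes(3, "big")
            + bytes([rights])
            + check.to_bytes(6, "big"))

def parse_capability(cap):
    return (int.from_bytes(cap[0:6], "big"),    # server port
            int.from_bytes(cap[6:9], "big"),    # object number
            cap[9],                             # rights bit map
            int.from_bytes(cap[10:16], "big"))  # check field

def allows(cap, right):
    # Does the capability's rights bit map include the given right?
    return bool(parse_capability(cap)[2] & right)

cap = make_capability(new_port(), 7, RIGHT_READ | RIGHT_WRITE,
                      int.from_bytes(os.urandom(6), "big"))
print(len(cap) * 8)               # 128 bits
print(allows(cap, RIGHT_READ))    # True
print(allows(cap, RIGHT_DELETE))  # False

Because ports and check fields are large random numbers, guessing a valid capability is statistically hopeless, which is what makes it safe to hand capabilities directly to user processes.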
Thread Management:
- A single process can have multiple threads; each thread has its own program counter and stack, but all threads share the process's address space.
- Threads behave much like processes and are scheduled independently.
- Threads can synchronize using mutexes and semaphores.
- The Bullet file server uses multiple threads: when one thread blocks (for example, waiting for disk I/O), the others keep serving requests, with shared data protected by a mutex (see the sketch after this list).
- The careful reader may have noticed that a user process can pull 813 kbytes/sec from the file server.
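A minimal sketch of this pattern in Python, with Python threads standing in for Amoeba kernel threads: several worker threads serve requests, each blocks independently on its own slow operation, and a mutex protects the shared state.

import threading
import time

cache = {}                      # shared state
cache_lock = threading.Lock()   # the mutex

def serve(request_id):
    # Each thread has its own stack and program counter; this sleep
    # stands in for blocking disk I/O and blocks only this thread.
    time.sleep(0.1)
    with cache_lock:            # hold the mutex only around shared data
        cache[request_id] = "data-%d" % request_id

threads = [threading.Thread(target=serve, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(cache))            # all five requests were served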
Unique features
Pool processors
Pool processors are groups of CPUs that are dynamically allocated according to user needs. When a program is executed, it can run on any of the available processors (a sketch of the allocation idea follows).
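Here is a sketch of the allocation idea with entirely hypothetical names; Amoeba's real run server also weighs load, available memory, and CPU architecture when choosing a processor.

class ProcessorPool:
    def __init__(self, cpu_ids):
        self.idle = set(cpu_ids)
        self.busy = {}                 # cpu_id -> program name

    def run(self, program):
        # Any available processor will do.
        if not self.idle:
            raise RuntimeError("no free processor in the pool")
        cpu = self.idle.pop()
        self.busy[cpu] = program
        return cpu

    def release(self, cpu):
        # Program finished: the CPU goes back into the pool.
        self.busy.pop(cpu, None)
        self.idle.add(cpu)

pool = ProcessorPool(range(4))
cpu = pool.run("cc main.c")            # lands on some free CPU
print("running on CPU", cpu)
pool.release(cpu)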
Supported architectures
Many different processor architectures are supported, including:
- i80386 (and later x86 chips such as the Pentium)
- 68K
- SPARC
The V Distributed System
- First tenet of the V design: high-performance communication is the most critical facility for distributed systems.
- Second: the protocols, not the software, define the system.
- Third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space, and communication model.
Ideas that significantly affected the design
- Shared Memory.
- Dealing with groups of entities the same way as with individual entities.
- An efficient file caching mechanism that uses the virtual memory caching mechanism.
Design Decisions
- Designed for a cluster of workstations with high-speed network access (it only really supports LANs).
- Abstracts away the physical architecture of the participating workstations by defining common protocols with well-defined interfaces.
V ran on a LAN, and its developers built a very fast IPC protocol, which made it one of the fastest distributed operating systems within a small geographic area. On top of the IPC protocol, V also implemented RPC calls in the background.
V uses the strong consistency model. This model can cause issues with concurrency because, in V, files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This can lead to problems unless there is an effective implementation to deal with multiple versions, etc.
The VMTP protocol was used for communication. It supports request-response behavior and provides transparency, a group communication facility, and flow control. Its role is much like TCP's, but it is organized around request-response transactions rather than byte streams.
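To show the request-response shape (as opposed to TCP's byte streams), here is a small Python sketch over UDP. This is not VMTP itself; real VMTP adds transaction identifiers, retransmission, group addressing, and flow control. It only shows the message pattern VMTP is built around: one request, the client blocks, one response.

import socket
import threading

def server(sock):
    data, addr = sock.recvfrom(1024)        # wait for a request
    sock.sendto(b"echo: " + data, addr)     # one response per request

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))                  # OS picks a free port
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)                         # don't block forever on loss
cli.sendto(b"hello", srv.getsockname())     # the request
reply, _ = cli.recvfrom(1024)               # block until the response
print(reply.decode())                       # echo: hello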