DistOS 2015W Session 4
= Andrew File System =
AFS (the Andrew File System) was created as a direct response to NFS. Universities ran into problems when they tried to scale NFS in a way that would let them share files among their staff effectively. AFS was more scalable than NFS because read and write operations happened locally before being committed to the server (the data store).

Since AFS copied files locally when they were opened and only sent the data back when they were closed, all operations during that time were very fast and did not need the network, as sketched below. NFS, by contrast, works with files remotely, so there is no data to transfer when opening or closing a file, making those operations instant.
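The whole-file model above can be sketched in a few lines of C. The helper names, stubbed transfers, and cache-path argument are illustrative assumptions, not AFS's actual client interface:

<pre>
/* Sketch of AFS-style whole-file caching. fetch_whole_file() and
 * store_whole_file() are hypothetical stand-ins for the client/server
 * transfer, stubbed out here -- this is not the real AFS client API. */
#include <stdio.h>

static int fetch_whole_file(const char *remote, const char *local)
{ (void)remote; (void)local; return 0; }  /* would copy server -> cache */

static int store_whole_file(const char *local, const char *remote)
{ (void)local; (void)remote; return 0; }  /* would copy cache -> server */

/* open: the ENTIRE file is transferred into the local cache first. */
FILE *afs_open(const char *remote, const char *local_cache)
{
    if (fetch_whole_file(remote, local_cache) != 0)
        return NULL;                    /* needs enough local disk space  */
    return fopen(local_cache, "r+");    /* all reads/writes are now local */
}

/* close: the whole (possibly modified) file is shipped back. If this
 * fails, the server never sees the updates -- the failure mode noted in
 * the problem list below. */
int afs_close(FILE *f, const char *local_cache, const char *remote)
{
    fclose(f);
    return store_whole_file(local_cache, remote);   /* check this result! */
}
</pre>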
There are several problems with this design, however.
* The local system must have enough space to temporarily store the file.
* Opening and closing files requires a lot of bandwidth for large files. To read even a single byte, the entire file must be retrieved (later versions remedied this).
* If the close operation fails, the server will not have the updated version of the file. Many programs don't even check the return value of the close operation, giving users the false impression that everything went well.
Given all this, AFS was suitable for working with small files, not large ones, limiting its usefulness. It is also notoriously annoying to set up as it is geared towards university-sized networks, further limiting its success.
= Amoeba Operating System =
== Capabilities: ==
* A capability is essentially a pointer (reference) to an object.
* A capability is "a kind of ticket or key that allows the holder of the capability to perform some (not necessarily all) operations on the object".
* Capabilities are location independent, so they can also be used to communicate across wide-area networks.
* Each user process owns some collection of capabilities, which together define the set of objects it may access and the types of operations that may be performed on each.
* To perform an operation, a client sends a request message to the server that manages the object and blocks; after the server has performed the operation, it sends back a reply message that unblocks the client.
* Sending the message, blocking, and accepting the reply together form a remote procedure call, which can be encapsulated in a stub to make the entire remote operation look like a local procedure call (see the sketch after this list).
* The first field of a capability is the server port, a 48-bit random number generated by the server. The second field is the object number, used by the server to identify which of its objects is being addressed; together, the server port and object number identify the object on which the operation is to be performed.
* The third field is the rights field, which contains a bit map telling which operations the holder of the capability may perform.
* X11 window management
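A compact sketch of the capability layout and the send-block-reply pattern described above. The field widths follow the Amoeba papers (48-bit port, 24-bit object number, 8-bit rights, 48-bit check field), but the struct names, rights bits, and the trans_request() helper are illustrative assumptions rather than Amoeba's real interface:

<pre>
#include <stdint.h>
#include <stddef.h>

/* Sketch of an Amoeba-style capability (128 bits in the papers).
 * Layout and names here are illustrative assumptions. */
struct capability {
    uint8_t  port[6];     /* 48-bit server port: random number picked by server */
    uint32_t object;      /* object number (24 bits in the papers)              */
    uint8_t  rights;      /* bit map of operations the holder may perform       */
    uint8_t  check[6];    /* 48-bit random check field guarding the rights      */
};

/* Hypothetical rights bits for some file server */
#define RIGHT_READ   0x01
#define RIGHT_WRITE  0x02

/* Hypothetical transport: send the request, block, accept the reply.
 * Stands in for Amoeba's transaction primitive; stubbed out here. */
static int trans_request(const struct capability *cap,
                         const void *req, size_t reqlen,
                         void *reply, size_t replylen)
{
    (void)cap; (void)req; (void)reqlen; (void)reply; (void)replylen;
    return 0;  /* a real client blocks here until the reply unblocks it */
}

/* Client stub: the remote operation looks like a local procedure call. */
int read_object(const struct capability *cap, void *buf, size_t len)
{
    if (!(cap->rights & RIGHT_READ))
        return -1;                       /* holder lacks the read right */
    return trans_request(cap, "READ", 4, buf, len);
}
</pre>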
== Unique features ==
== Pool processors ==
Pool processors are a group of CPUs that are dynamically allocated as users need them. When a program is executed, it can run on any of the available processors, as in the sketch below.
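A toy sketch of the allocation idea, under the assumption that the pool is just an array of busy/free flags; Amoeba's real processor pool management is considerably more involved:

<pre>
/* Toy sketch of a processor pool: a program runs on whichever CPU is
 * free. The flat busy/free array is an assumption for illustration. */
#include <stdio.h>

#define POOL_SIZE 16
static int busy[POOL_SIZE];            /* 0 = free, 1 = allocated */

/* Grab any free processor from the pool; -1 if all are busy. */
int allocate_processor(void)
{
    for (int i = 0; i < POOL_SIZE; i++)
        if (!busy[i]) { busy[i] = 1; return i; }
    return -1;
}

void release_processor(int cpu) { busy[cpu] = 0; }

int main(void)
{
    int cpu = allocate_processor();    /* any available CPU will do */
    printf("program scheduled on pool processor %d\n", cpu);
    release_processor(cpu);
    return 0;
}
</pre>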
=== Thread Management: ===
* A single process can have multiple threads; each thread has its own program counter and stack.
* In other respects, threads behave like processes.
* Threads can be synchronized using mutexes and semaphores.
* The Bullet file server, for example, is built from multiple threads.
* When one thread blocks (on a mutex or on I/O), the other threads in the process can continue to run; a thread that tries to lock a mutex held by another thread blocks until the mutex is released. A pthreads sketch of this pattern follows below.
* The careful reader may have noticed that a user process can pull 813 kbytes/sec (a performance figure quoted from the Amoeba paper).
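A minimal sketch of the mutex pattern above, written with POSIX threads rather than Amoeba's own thread library, purely for illustration:

<pre>
/* Two threads sharing one counter, serialized by a mutex; if one thread
 * holds the lock, the other blocks until it is released. POSIX threads
 * stand in for Amoeba's thread library here. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* blocks if the other thread holds it */
        counter++;                    /* each thread has its own stack/PC,   */
        pthread_mutex_unlock(&lock);  /* but both share this address space   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the mutex */
    return 0;
}
</pre>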
= The V Distributed System =
* The first tenet of V's design: high-performance communication is the most critical facility for distributed systems.
* The second: the protocols, not the software, define the system.
* The third: a relatively small operating system kernel can implement the basic protocols and services, providing a simple network-transparent process, address space, and communication model.
== Ideas that significantly affected the design ==
* Shared memory.
* Dealing with groups of entities the same way as with individual entities.
* An efficient file caching mechanism built on the virtual memory caching mechanism.
== Design Decisions ==
* Designed for a cluster of workstations with high-speed network access (it only really supports a LAN).
* Abstract away the physical architecture of the participating workstations by defining common protocols with well-defined interfaces.
V ran on a LAN, and its developers built a very fast IPC protocol, which made V one of the fastest distributed operating systems within a small geographic area. Aside from the IPC protocols, V also implemented RPC calls in the background.
V uses a strong consistency model. This model can cause issues with concurrency because, in V, files are a memory space: two different users accessing the same file are in fact accessing the same memory location. This could result in problems unless there is an effective implementation to deal with multiple versions, etc. A small sketch of the file-as-memory idea follows below.
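This sketch uses POSIX mmap() as an analogy for V's file-as-memory model, since it shows a file and a memory region being the same bytes; it is an analogy, not V's actual mechanism, and the file name is arbitrary:

<pre>
/* Two processes that mmap() the same file share the same pages, so a
 * write by one is a write to the "file" seen by the other -- hence the
 * need for synchronization under strong consistency. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, 4096);                     /* make sure the file has a page */

    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);     /* the file IS this memory region */
    if (mem == MAP_FAILED) return 1;

    strcpy(mem, "hello from user A");        /* visible to any other mapper */
    msync(mem, 4096, MS_SYNC);               /* flush the change to the file */

    munmap(mem, 4096);
    close(fd);
    return 0;
}
</pre>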
The VMTP protocol was used for communication. It supports request-response behaviour, and it also provides transparency, a group communication facility, and flow control, making it broadly comparable to TCP in the services it offers.
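A tiny sketch of the request-response pattern VMTP supports, written with ordinary UDP sockets since VMTP itself is not available on modern systems; the port number and message contents are arbitrary assumptions:

<pre>
/* Request-response in the VMTP style: send a request, then block until
 * the matching response arrives. Plain UDP is used here because VMTP is
 * not available; the port and payload are arbitrary. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(9000);                     /* arbitrary port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    /* Send the request... */
    const char *req = "GET time";
    sendto(s, req, strlen(req), 0,
           (struct sockaddr *)&server, sizeof(server));

    /* ...then block until the response unblocks us (VMTP pairs the two). */
    char resp[512];
    ssize_t n = recvfrom(s, resp, sizeof(resp) - 1, 0, NULL, NULL);
    if (n >= 0) {
        resp[n] = '\0';
        printf("response: %s\n", resp);
    }
    close(s);
    return 0;
}
</pre>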