Operating Systems 2014F Lecture 10


By using base 16 vs. base 10, yes, the alphabet is larger, but when you talk about hex vs. decimal you are only changing how we read the same value. Still, there is something more natural about hex: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? log2(10), somewhere between 3 and 4, a fractional number of bits. It's messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That's also why you see octal: an octal digit represents 3 bits instead of 4 (a reduced range per digit). Hexadecimal, at 4 bits per digit, is a bit cleaner.
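As a quick illustration (the value 0xB6 is made up for the example, not from lecture), here is a small C program printing one byte in all three bases. Each hex digit lines up with exactly 4 bits and each octal digit with 3, while decimal has no clean bit boundary at all:

```c
#include <stdio.h>

int main(void) {
    unsigned int v = 0xB6;  /* binary 1011 0110 */

    /* Hex digits cover 4 bits each:   B = 1011, 6 = 0110.
       Octal digits cover 3 bits each: 0266 = 10 110 110.
       Decimal (182) doesn't line up with the bits at all. */
    printf("hex %#x = octal %#o = decimal %u\n", v, v, v);
    return 0;
}
```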

The page size is a power of two, so the range of the offset is as well: the number of bits used to encode the offset determines the page size. The offset names a specific byte within a page, so its range is the range of bytes that can go in a page. Take classic x86 with 4 KiB pages: what is the range of offsets? 0 through 4095, i.e. 12 bits.
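For example (a minimal sketch in C, assuming 4 KiB pages and 32-bit virtual addresses; the address value is made up), splitting an address into page number and offset is just masking and shifting:

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u            /* 4 KiB page, a power of two */
#define OFFSET_MASK (PAGE_SIZE - 1)  /* 0xFFF: the low 12 bits */

int main(void) {
    uint32_t addr = 0x12345678;      /* arbitrary example address */

    /* The low 12 bits are the offset within the page (0-4095);
       the remaining high bits select the page. */
    uint32_t offset = addr & OFFSET_MASK;
    uint32_t page   = addr >> 12;

    printf("address 0x%08x -> page 0x%05x, offset 0x%03x (%u)\n",
           addr, page, offset, offset);
    return 0;
}
```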

Another thing you will need for the assignment: one question refers to Peterson's algorithm, which doesn't work on modern machines. The big idea here is the memory hierarchy. Peterson's algorithm rests on a key assumption: when one processor writes to memory, the other processors will immediately see that write, that value. Is that true in modern systems? No.
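For reference, here is a sketch of the textbook two-thread version of Peterson's algorithm in C (flag and turn are the standard variable names from the algorithm). The comments mark the visibility and ordering assumption that modern hardware breaks; no memory fences are included, which is exactly the problem:

```c
/* Peterson's algorithm for two threads (0 and 1). The textbook
   version assumes every write becomes visible to the other
   processor immediately, in program order. */
volatile int flag[2] = {0, 0};
volatile int turn = 0;

void lock(int me) {
    int other = 1 - me;
    flag[me] = 1;       /* I want in */
    turn = other;       /* but you go first if we collide */
    /* On a modern CPU the stores above can be reordered or sit in
       a store buffer, so both threads may pass the test below and
       enter the critical section together. A memory fence would be
       needed here for this to work. */
    while (flag[other] && turn == other)
        ; /* busy-wait */
}

void unlock(int me) {
    flag[me] = 0;
}
```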

Concurrency Mechanisms:

locks
condition variables
semaphores

You build all of these using the same underlying mechanism: the hardware lets you check a value and change it in one atomic operation. That is the basic functionality the hardware provides; the semantics differ in these higher-level constructs. When do you use which, and what is the purpose of each? There is a huge amount of overlap. What you have with all of these is some sort of storage with atomic operations, so you can check its value and change it all at one time (and so avoid race conditions when you manipulate the value), plus a queue of threads.
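As a sketch of that idea, here is a bare spinlock built on C11's atomic test-and-set (the names spin_lock and spin_unlock are just illustrative). This is the "check it and change it in one atomic operation" primitive; a real lock would also maintain a queue of blocked threads rather than spinning:

```c
#include <stdatomic.h>

/* atomic_flag_test_and_set reads the old value and sets the flag
   to 1 in a single atomic step, so two threads can never both
   observe "unlocked" and proceed. */
atomic_flag lock_word = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock_word))
        ; /* someone else holds it; retry until we see 0 */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_word); /* release always succeeds */
}
```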

The difference between them comes down to how the storage and the queue of threads interact. Take a lock: you grab the lock and you release the lock. Does releasing the lock always succeed? Yes; if you hold the lock and you release it, that always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a trylock. Trylock checks whether the lock is grabbable: it grabs the lock if it's available, but if it's not, it returns immediately. It's non-blocking, because maybe you don't want to sit there and wait. Most of the time, though, you will just try to grab the lock and, if the lock is held, wait until grabbing it succeeds.
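A sketch of the blocking vs. non-blocking distinction using POSIX mutexes (the worker function and the "other work" fallback are hypothetical; pthread_mutex_trylock is the real POSIX call, returning 0 on success and EBUSY if the lock is held):

```c
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void worker(void) {
    /* Blocking: the thread waits here until the lock is free. */
    pthread_mutex_lock(&m);
    /* ... critical section ... */
    pthread_mutex_unlock(&m);

    /* Non-blocking: returns immediately if the lock is held,
       so the thread can go do something else instead. */
    if (pthread_mutex_trylock(&m) == 0) {
        /* got the lock */
        pthread_mutex_unlock(&m);
    } else {
        /* lock was busy; fall back to other work */
    }
}
```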