Operating Systems 2017F Lecture 10

==Video==


Video from the lecture given on October 10, 2017 [http://homeostasis.scs.carleton.ca/~soma/os-2017f/lectures/comp3000-2017f-lec10-10Oct2017.mp4 is now available].  (The audio worked!)
==Notes==
===In-Class===
<pre>
Lecture 10
----------
Concurrency
- more than one thing happening at the same time
challenge with concurrency is keeping shared state consistent
* files that are written to by multiple processes
* two or more processes communicating
* multiple threads within a process
concurrency leads to race conditions
- order of interleaved actions affects output
- ex. did process A or B modify the file first?
We want deterministic behavior from concurrent computations
To get consistent computation, we need to coordinate
- but you can't coordinate safely via regular variables
Regular variables are subject to the memory hierarchy
- variables are stored in registers, L1 cache, L2 cache, L3 cache, main memory
- can be deeper in some contexts
- so what if coordination variable is in L1 cache for one thread and in main memory for another?
- hardware maintains consistency...eventually
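A minimal sketch of why coordinating through a plain variable is
broken (variable names are illustrative, not from lecture):

#include <pthread.h>
#include <stdio.h>

/* BROKEN on purpose: plain ints are not safe coordination variables.
   The compiler may cache `ready` in a register, and each core may see
   a stale copy, so the consumer can spin forever or read a stale
   value of `data`. */
int ready = 0;
int data  = 0;

void *producer(void *arg) {
    data  = 42;
    ready = 1;              /* store may be reordered or sit in a cache */
    return NULL;
}

void *consumer(void *arg) {
    while (!ready)          /* may never observe the store above */
        ;
    printf("data = %d\n", data);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}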
mechanisms for coordinating concurrent computations must bypass the memory hierarchy
Instead, use
- filesystem locking primitives (kernel)
- special machine language instructions
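For example, flock() is one such kernel locking primitive; a sketch
(the lock file path is an arbitrary choice):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    /* /tmp/demo.lock is an arbitrary example path */
    int fd = open("/tmp/demo.lock", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); exit(1); }

    flock(fd, LOCK_EX);   /* blocks until we hold the exclusive lock */
    /* ... critical section: only one process runs this at a time ... */
    flock(fd, LOCK_UN);   /* release so another process can proceed */

    close(fd);
    return 0;
}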
Operating systems - kernels in particular - are highly concurrent
- makes them painful to code
- have to always worry about having exclusive (or safe) access to data, resources
Abstractions for managing concurrency
- atomic variables guarantee strict ordering of operations
Example
Variable X:  ***** <- range of memory locations
To modify X, you...
  - load X into a register
  - modify the register
  - save the register to X
What if two cores try to modify X at the same time?
  - both could have X in a register at the same time
X is 5
core A increments by 1
core B decrements by 1
What's X after both have run?
If B changes X but doesn't see A's change...you get 4
If A changes X but doesn't see B's change...you get 6
For X to be atomic, the lost updates above shouldn't be possible
- has to make sure only one core accesses X at a time
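C11 atomics give exactly that guarantee; a sketch of the X example
above, where the result is always 5:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int x = 5;

void *inc(void *arg) { atomic_fetch_add(&x, 1); return NULL; } /* core A */
void *dec(void *arg) { atomic_fetch_sub(&x, 1); return NULL; } /* core B */

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, dec, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("x = %d\n", atomic_load(&x));  /* always 5: no lost update */
    return 0;
}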
Atomic variables are typically used to build semaphores...
But what we really want is mutual exclusion
- only one thread/process/CPU can access the resource at a time
Want mutual exclusion for larger resources, e.g. data structures, devices
Higher-level languages directly support mutual exclusion, e.g.
Java synchronized methods
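In C the usual tool is a mutex; a sketch guarding a hypothetical
shared linked list:

#include <pthread.h>

struct node { struct node *next; int value; };
struct list { struct node *head; };

static struct list shared_list;                /* the shared resource */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

void list_push(struct node *n) {
    pthread_mutex_lock(&list_lock);    /* enter critical section */
    n->next = shared_list.head;        /* safe: exclusive access */
    shared_list.head = n;
    pthread_mutex_unlock(&list_lock);  /* leave critical section */
}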
Semaphores are an analogy to a railroad control mechanism
- only one train on the track!
When a train enters the shared track
- grab the semaphore (decrement the atomic variable;
  if it is already 0, wait until it isn't)
When a train leaves the shared track
- release the semaphore (increment the atomic variable)
Lesson: don't implement semaphores yourself, please!
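With the POSIX library doing the hard part, the train analogy looks
roughly like this:

#include <semaphore.h>

sem_t track;  /* counting semaphore guarding the shared track */

void setup(void) { sem_init(&track, 0, 1); } /* 1 = one train at a time */
void enter(void) { sem_wait(&track); }  /* grab; blocks while count is 0 */
void leave(void) { sem_post(&track); }  /* release; wakes one waiter */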
In the Linux kernel
- locks using semaphores of various types are associated with all kinds
  of data structures
  e.g. inodes
When you can't get the lock, what do you do?
- sleep (tell the scheduler to wake you up when the lock is available)
- deal with not getting the resource
- spin (do a null loop)
If the wait time is short, spinning is the fastest thing you can do
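Both options, sketched with a pthread mutex (real spinning code would
use a dedicated spinlock such as pthread_spinlock_t):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void take_lock_spinning(void) {
    while (pthread_mutex_trylock(&m) != 0)
        ;  /* null loop: burns CPU, but no scheduler round trip */
}

void take_lock_sleeping(void) {
    pthread_mutex_lock(&m);  /* scheduler sleeps us until it's free */
}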
Where possible, keep resources exclusive to a single thread and minimize sharing
But in userspace...
- shared memory...NOOOOOOO
    - unless you use atomic variable, or something higher level
      like a semaphore
- messages
    - series of messages can be used to synchronize state
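A sketch of the message style using a pipe: the parent blocks until
the child's one-byte message arrives, so the message itself does the
synchronizing:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf;

    pipe(fd);
    if (fork() == 0) {         /* child: do work, then send a message */
        close(fd[0]);          /* child only writes */
        write(fd[1], "x", 1);  /* one-byte message: "done" */
        _exit(0);
    }
    close(fd[1]);              /* parent only reads */
    read(fd[0], &buf, 1);      /* blocks until the child's message */
    printf("child is done\n");
    return 0;
}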
3-way handshakes, e.g. opening a TCP connection
- SYN (synchronize) client->server
- SYN-ACK (acknowledge SYN) server->client
- ACK (acknowledge) client->server
Can send messages between processes
- signals!
- other ways too
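A sketch using a signal as the message (SIGUSR1 is an arbitrary
choice):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    sigset_t set;
    int sig;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL);  /* hold SIGUSR1 until we ask */

    pid_t parent = getpid();
    if (fork() == 0) {                   /* child sends the "message" */
        kill(parent, SIGUSR1);
        _exit(0);
    }

    sigwait(&set, &sig);   /* parent sleeps until the signal arrives */
    printf("got SIGUSR1 from the child\n");
    return 0;
}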
concurrency pattern: producer/consumer
* two processes P and C
* P "makes" things, C "consumes" them
* want to coordinate P and C
  - P shouldn't produce too much (C gets too far behind)
  - C shouldn't waste time waiting for P
* shared circular buffer for storing output of P and input of C
* when buffer is empty, C should sleep
* when buffer is full, P should sleep
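The classic sketch uses two counting semaphores plus a mutex (buffer
size 8 is arbitrary):

#include <pthread.h>
#include <semaphore.h>

#define N 8                  /* buffer size (arbitrary) */

int buf[N];
int in = 0, out = 0;         /* next write and read positions */
sem_t empty, full;           /* counts of free slots and queued items */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void init(void) {
    sem_init(&empty, 0, N);  /* N empty slots to start */
    sem_init(&full, 0, 0);   /* no items yet */
}

void produce(int item) {     /* P calls this */
    sem_wait(&empty);        /* buffer full? sleep until a slot opens */
    pthread_mutex_lock(&m);
    buf[in] = item;
    in = (in + 1) % N;       /* circular wraparound */
    pthread_mutex_unlock(&m);
    sem_post(&full);         /* wake a sleeping consumer */
}

int consume(void) {          /* C calls this */
    int item;
    sem_wait(&full);         /* buffer empty? sleep until an item arrives */
    pthread_mutex_lock(&m);
    item = buf[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&empty);        /* wake a sleeping producer */
    return item;
}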
</pre>
