<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Operating_Systems_2022F_Lecture_14</id>
	<title>Operating Systems 2022F Lecture 14 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Operating_Systems_2022F_Lecture_14"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2022F_Lecture_14&amp;action=history"/>
	<updated>2026-04-05T22:41:56Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2022F_Lecture_14&amp;diff=24128&amp;oldid=prev</id>
		<title>Soma: Created page with &quot;==Video==  Video from the lecture given on November 3, 2022 is now available: * [https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec14-20221103.m4v video] * [https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec14-20221103.cc.vtt auto-generated captions] Video is also available through Brightspace (Resources-&gt;Zoom meeting-&gt;Cloud Recordings tab)  ==Notes==  &lt;pre&gt; Lecture 14 ----------  Assignment 3 has not yet been relea...&quot;</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2022F_Lecture_14&amp;diff=24128&amp;oldid=prev"/>
		<updated>2022-11-03T20:12:55Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;==Video==  Video from the lecture given on November 3, 2022 is now available: * [https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec14-20221103.m4v video] * [https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec14-20221103.cc.vtt auto-generated captions] Video is also available through Brightspace (Resources-&amp;gt;Zoom meeting-&amp;gt;Cloud Recordings tab)  ==Notes==  &amp;lt;pre&amp;gt; Lecture 14 ----------  Assignment 3 has not yet been relea...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==Video==&lt;br /&gt;
&lt;br /&gt;
Video from the lecture given on November 3, 2022 is now available:&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec14-20221103.m4v video]&lt;br /&gt;
* [https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec14-20221103.cc.vtt auto-generated captions]&lt;br /&gt;
Video is also available through Brightspace (Resources-&amp;gt;Zoom meeting-&amp;gt;Cloud Recordings tab)&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Lecture 14&lt;br /&gt;
----------&lt;br /&gt;
&lt;br /&gt;
Assignment 3 has not yet been released&lt;br /&gt;
 - hoping to post tomorrow&lt;br /&gt;
 - due date will be 11th at earliest, will update once assignment is posted&lt;br /&gt;
&lt;br /&gt;
Tutorial 7 is *almost* ready, that will be out by tomorrow definitely&lt;br /&gt;
 - you can start looking at it now if you want&lt;br /&gt;
&lt;br /&gt;
So tutorial due dates move with the assignment due dates&lt;br /&gt;
 - so you have some more time on 5 and 6&lt;br /&gt;
&lt;br /&gt;
A3 is based on T5 and T6, NOT T7&lt;br /&gt;
&lt;br /&gt;
Deadlock&lt;br /&gt;
--------&lt;br /&gt;
 - one class of concurrency bugs&lt;br /&gt;
 - see the textbook for others&lt;br /&gt;
&lt;br /&gt;
What is deadlock?&lt;br /&gt;
 - a situation in which progress stops because everyone is waiting on a resource that will never become available, since another part of the system is holding onto it&lt;br /&gt;
&lt;br /&gt;
Dining philosophers problem&lt;br /&gt;
 - five philosophers sitting around the table&lt;br /&gt;
 - have food in front of them that they need 2 utensils to eat&lt;br /&gt;
 - but there are only 5 utensils at the table, one between each pair of neighbors&lt;br /&gt;
 - problem: how do we make sure that the philosophers eat?&lt;br /&gt;
&lt;br /&gt;
What algorithm works?&lt;br /&gt;
&lt;br /&gt;
Think of the basic approach:&lt;br /&gt;
 - grab the utensil on the left&lt;br /&gt;
 - grab the one on the right&lt;br /&gt;
 - eat&lt;br /&gt;
 - put utensil on the right down&lt;br /&gt;
 - put utensil on the left down&lt;br /&gt;
&lt;br /&gt;
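The steps above can be sketched in Python (a hedged, illustrative sketch: the names and the lock-per-utensil setup are assumptions for this example, not code from the course):&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch of the naive approach: one lock per utensil.
# Names (utensils, dine) are made up for this example.
import threading

N = 5
utensils = [threading.Lock() for _ in range(N)]

def dine(i):
    left = utensils[i]
    right = utensils[(i + 1) % N]
    left.acquire()       # grab the utensil on the left
    right.acquire()      # grab the one on the right
    pass                 # eat
    right.release()      # put the utensil on the right down
    left.release()       # put the utensil on the left down

dine(0)
```
&lt;br /&gt;
A single philosopher always finishes; the trouble only appears when all five run these steps concurrently, as described next.&lt;br /&gt;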
Why can this result in everyone starving?&lt;br /&gt;
&lt;br /&gt;
If everyone tries to eat at the same time, they will all be holding a left utensil&lt;br /&gt;
and will never put it down&lt;br /&gt;
&lt;br /&gt;
What if we add a one second timeout?  (i.e., if you don&amp;#039;t get the right utensil in one second you put down the left one)&lt;br /&gt;
 - but then you get everyone picking up and putting down the left utensil in unison; they all still starve (livelock)&lt;br /&gt;
&lt;br /&gt;
You can add in a random timeout; that mostly works&lt;br /&gt;
 - but the pauses can go on for arbitrarily long&lt;br /&gt;
&lt;br /&gt;
The dining philosophers problem results in deadlock because four conditions all hold:&lt;br /&gt;
 - mutual exclusion (only one has a utensil at a time)&lt;br /&gt;
 - hold-and-wait (holding on to some resources when they don&amp;#039;t have all that&lt;br /&gt;
   they need to eat)&lt;br /&gt;
 - no preemption (no stealing utensils)&lt;br /&gt;
 - circular wait (they are sitting in a circle)&lt;br /&gt;
&lt;br /&gt;
All these have to hold to get deadlock, break any one of them and you can avoid it&lt;br /&gt;
&lt;br /&gt;
The trick with solutions to this problem isn&amp;#039;t to avoid deadlock, it is to avoid deadlock *efficiently*&lt;br /&gt;
 - minimize delays due to resource contention&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Classic way to avoid deadlock issues is to just have one master lock&lt;br /&gt;
 - used by old versions of Linux, still in place (I think) in the standard Python runtime&lt;br /&gt;
&lt;br /&gt;
But if you do this, then you minimize parallelism&lt;br /&gt;
 - only one lock means every time a shared resource is needed, we&amp;#039;re down to a single thread running&lt;br /&gt;
&lt;br /&gt;
In a parallel system, you want all parts of the system to keep running all the time&lt;br /&gt;
 - waiting on other threads/processes reduces overall performance&lt;br /&gt;
 - how to minimize the waiting for resources (resource contention)?&lt;br /&gt;
&lt;br /&gt;
Temptation is to make LOTS of locks, one for each resource&lt;br /&gt;
 - but the more locks we add, the higher risk of deadlock&lt;br /&gt;
&lt;br /&gt;
Simplest solution to avoid deadlock is to always allocate resources (locks) in a fixed order&lt;br /&gt;
 - A, B, C but never A, C, B&lt;br /&gt;
 - avoids circular dependence&lt;br /&gt;
&lt;br /&gt;
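A hedged sketch of this fix in Python (illustrative names, not the course code): every philosopher grabs the lower-numbered utensil first, so no cycle of waiters can form.&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of fixed-order lock acquisition for the dining philosophers.
# Illustrative only; names (locks, meals, philosopher) are made up.
import threading

N = 5
locks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    a, b = sorted([i, (i + 1) % N])   # fixed global order: lower index first
    for _ in range(rounds):
        with locks[a]:
            with locks[b]:
                meals[i] += 1         # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()           # terminates: ordering breaks circular wait
```
&lt;br /&gt;
All five threads finish because philosopher 4, like everyone else, asks for utensil 0 before utensil 4, which breaks the circular wait.&lt;br /&gt;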
But what in your code enforces that you grab locks always in the same order?&lt;br /&gt;
NOTHING&lt;br /&gt;
&lt;br /&gt;
Why is there so much discussion of deadlock in concurrency theory and in OS courses?&lt;br /&gt;
 - easy to formalize&lt;br /&gt;
 - many possible algorithms&lt;br /&gt;
&lt;br /&gt;
In practice, we add in timeouts and try to allocate resources in a fixed order&lt;br /&gt;
&lt;br /&gt;
In Tutorial 6:&lt;br /&gt;
 - 3000pc-fifo uses pipes, works&lt;br /&gt;
 - 3000pc-rendezvous uses shared memory&lt;br /&gt;
    - is fast, but deadlocks&lt;br /&gt;
 - 3000pc-rendezvous-timeout uses shared memory, timeouts&lt;br /&gt;
    - not as fast because it pauses randomly, but works&lt;br /&gt;
&lt;br /&gt;
Note that the shared memory versions use functions from the pthread library&lt;br /&gt;
(which is specially linked in).  pthread is a standard UNIX library (part of POSIX), but it is implemented differently on different systems, because system calls related to concurrency aren&amp;#039;t standardized&lt;br /&gt;
 - note that the clone system call, used to make threads, is not standard; it is Linux-specific&lt;br /&gt;
&lt;br /&gt;
Why does 3000pc-rendezvous deadlock?  What is happening?&lt;br /&gt;
 - I didn&amp;#039;t realize this would happen when I made it&lt;br /&gt;
 - 3000pc-rendezvous-timeout was a bugfix!&lt;br /&gt;
&lt;br /&gt;
So what actually happens?&lt;br /&gt;
 - both the producer and consumer go to sleep at the same time&lt;br /&gt;
&lt;br /&gt;
Compare 3000pc-rendezvous and 3000pc-rendezvous-timeout&lt;br /&gt;
 - logic in 3000pc-rendezvous is much simpler, but it deadlocks&lt;br /&gt;
 - all 3000pc-rendezvous-timeout does is add timeouts to waiting for all locks and condition variables, because sometimes things go wrong&lt;br /&gt;
&lt;br /&gt;
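The timeout-plus-recheck pattern can be sketched in Python (an illustrative analogue of pthread_cond_timedwait, with made-up names, not the actual 3000pc code):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of waiting on a condition variable with a timeout, rechecking
# the predicate each time. Names (cond, ready, producer) are illustrative.
import threading

cond = threading.Condition()
ready = False

def producer():
    global ready
    with cond:
        ready = True
        cond.notify()

threading.Timer(0.05, producer).start()

with cond:
    # wait in a loop, rechecking the predicate; the timeout bounds
    # how long we sleep if a wakeup is ever missed
    while not ready:
        cond.wait(timeout=0.5)
```
&lt;br /&gt;
The loop is the important part: a timed wait can return with nothing to do, so the waiter must recheck and wait again rather than assume the event happened.&lt;br /&gt;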
spinlocks are a kind of mutex which &amp;quot;busy waits&amp;quot;:&lt;br /&gt;
 - when waiting for the lock to be available, just do while(1)&lt;br /&gt;
    - so CPU is &amp;quot;spinning&amp;quot; while waiting for the lock to be available&lt;br /&gt;
 - not so efficient in general, but the most efficient strategy when the wait should be short&lt;br /&gt;
 &lt;br /&gt;
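A spinlock can be sketched in Python as a nonblocking-acquire loop (illustrative only; real spinlocks rely on atomic hardware instructions, and Python is a poor fit for genuine busy-waiting):&lt;br /&gt;
&lt;br /&gt;
```python
# Busy-wait sketch: spin on a nonblocking acquire instead of sleeping.
# Names (flag, worker, counter) are made up for this example.
import threading

flag = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while not flag.acquire(blocking=False):
            pass                      # spin: burn CPU until the lock frees up
        counter += 1                  # critical section
        flag.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
```
&lt;br /&gt;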
But if the wait is going to be longer, you don&amp;#039;t want to consume CPU time doing nothing - that just makes your CPU hot for no good reason&lt;br /&gt;
&lt;br /&gt;
Instead, we want to sleep while waiting, to get woken up when the resource is available&lt;br /&gt;
&lt;br /&gt;
The mutexes we are using sleep rather than busy wait&lt;br /&gt;
&lt;br /&gt;
Why not always sleep?  For that we have to understand CPU scheduling&lt;br /&gt;
&lt;br /&gt;
When you look at the STAT column in ps, you&amp;#039;ll see many letter codes&lt;br /&gt;
 - S: sleeping&lt;br /&gt;
 - R: running&lt;br /&gt;
 - I: idle kernel thread&lt;br /&gt;
 - Z: zombie (terminated, waiting for wait)&lt;br /&gt;
&lt;br /&gt;
What is the difference between a process sleeping and running?&lt;br /&gt;
 - running: it is executing instructions on a CPU core&lt;br /&gt;
 - sleeping: it is waiting to execute instructions, it isn&amp;#039;t doing anything now&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
process executing&lt;br /&gt;
 - in running mode&lt;br /&gt;
 - on a CPU core in user mode&lt;br /&gt;
 - registers hold process&amp;#039;s state&lt;br /&gt;
&lt;br /&gt;
runnable process&lt;br /&gt;
 - could be on a CPU or could be waiting&lt;br /&gt;
 - is not waiting on any events, just needs to get back on the CPU&lt;br /&gt;
&lt;br /&gt;
process sleeping&lt;br /&gt;
 - in sleep mode&lt;br /&gt;
 - NOT on a CPU core, instead state is stored in process data structure&lt;br /&gt;
   in kernel&lt;br /&gt;
   - thus has no state in registers&lt;br /&gt;
 - waiting for an event to happen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Process states are just an abstraction over how the CPU is shared between processes&lt;br /&gt;
 - a CPU core can be executing a process OR it can be executing kernel code&lt;br /&gt;
 - when executing a process, it is in user mode&lt;br /&gt;
 - when executing the kernel, it is in supervisor mode&lt;br /&gt;
&lt;br /&gt;
How does the CPU switch between running a process in user mode and the kernel in supervisor mode?&lt;br /&gt;
&lt;br /&gt;
One way: the process makes a system call&lt;br /&gt;
 - syscall instruction causes CPU to switch to kernel mode, jump to&lt;br /&gt;
   system call handler routine in the kernel and execution proceeds from there&lt;br /&gt;
&lt;br /&gt;
BUT, what if a process never makes a system call?&lt;br /&gt;
 - say if it is just crunching some data?&lt;br /&gt;
&lt;br /&gt;
Interrupts are events generated by the hardware that have to be handled by the kernel&lt;br /&gt;
 - hardware generates interrupt&lt;br /&gt;
 - CPU switches to supervisor mode, handles the interrupt event&lt;br /&gt;
&lt;br /&gt;
What sort of hardware generates interrupts? Well, basically everything except RAM (and maybe even RAM does on some systems, if it detects errors)&lt;br /&gt;
 - keyboard, mouse&lt;br /&gt;
 - CPU (divide by zero, syscall instructions, etc)&lt;br /&gt;
 - SSD&lt;br /&gt;
 - sound, video&lt;br /&gt;
 - network&lt;br /&gt;
 - timers  &amp;lt;--- very important&lt;br /&gt;
&lt;br /&gt;
EVERY time you hit a key, some part of the system has to handle the keypress&lt;br /&gt;
 - to a first approximation, every keypress generates an interrupt&lt;br /&gt;
&lt;br /&gt;
Basically when a device needs attention from the CPU, it sends the CPU an interrupt&lt;br /&gt;
 - the interrupt causes the CPU to stop what it is doing and run code&lt;br /&gt;
   registered as the designated handler for that interrupt&lt;br /&gt;
&lt;br /&gt;
signals are like interrupts for processes, with signal handlers playing the role of interrupt handlers&lt;br /&gt;
 - but it is a metaphorical connection, a signal is an OS abstraction&lt;br /&gt;
   while interrupts are a feature of the hardware&lt;br /&gt;
&lt;br /&gt;
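Signal handling can be sketched in Python (a hedged example using SIGALRM; POSIX-only, and the handler name is made up):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of a signal handler, the process-level analogue of an
# interrupt handler. SIGALRM delivery requires a POSIX system.
import signal, time

fired = []

def handler(signum, frame):
    fired.append(signum)              # runs asynchronously, like an ISR

signal.signal(signal.SIGALRM, handler)
signal.setitimer(signal.ITIMER_REAL, 0.05)    # timer delivers SIGALRM
time.sleep(0.2)                               # interrupted when the alarm fires
```
&lt;br /&gt;
Just as an interrupt makes the CPU stop and run kernel code, the signal makes this process stop what it is doing and run the registered handler.&lt;br /&gt;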
&lt;br /&gt;
Fun fact&lt;br /&gt;
 - on old macs (before OS X), if you held the mouse button down&lt;br /&gt;
   it wouldn&amp;#039;t be able to receive any network traffic and in fact all&lt;br /&gt;
   running programs would pause&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&amp;#039;ve discussed here how we go from running a process to running the kernel&lt;br /&gt;
 - basically, interrupts&lt;br /&gt;
&lt;br /&gt;
How do we get from the kernel to running a process?  that&amp;#039;s scheduling, and&lt;br /&gt;
is for next time!&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Soma</name></author>
	</entry>
</feed>