<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Afry</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Afry"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Afry"/>
	<updated>2026-04-22T08:47:09Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Assignment_10&amp;diff=20073</id>
		<title>WebFund 2015W: Assignment 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Assignment_10&amp;diff=20073"/>
		<updated>2015-03-31T14:58:19Z</updated>

		<summary type="html">&lt;p&gt;Afry: There was a small edit to make the question more understandable.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;This assignment is not yet finalized.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
10 points + 1 bonus&lt;br /&gt;
&lt;br /&gt;
Fix [http://homeostasis.scs.carleton.ca/~soma/webfund-2015w/code/tls-notes.zip tls-notes]&lt;br /&gt;
&lt;br /&gt;
# [1] Make the server connect using TLS.&lt;br /&gt;
# [1] Make the &amp;quot;Register&amp;quot; button work.&lt;br /&gt;
# [1] Fix the password entry field so it is the same width as the username entry field.&lt;br /&gt;
# [1] Make the Edit Note screen show the old note title.&lt;br /&gt;
# [1] Fix the &amp;quot;New Note&amp;quot; button.&lt;br /&gt;
# [1] Fix the &amp;quot;Delete&amp;quot; button.&lt;br /&gt;
# [1] Fix &amp;quot;Change Username&amp;quot; so it doesn&#039;t generate a 500 error.&lt;br /&gt;
# [1] Fix &amp;quot;Change Username&amp;quot; so that it appends &amp;quot;&#039;s&amp;quot; to the username in the page title immediately after the username is successfully changed (i.e., without requiring the page to be reloaded).&lt;br /&gt;
# [2] Fix &amp;quot;Change Username&amp;quot; so it updates the users collection as well as the notes collection.&lt;br /&gt;
# [BONUS 1] Regenerate the keys so they have the same basic metadata except that the contact email address is yours.  (Don&#039;t try to make the creation and expiry dates or the key hash the same.  Focus on the data that is normally entered when creating a certificate.)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Assignment_7&amp;diff=19995</id>
		<title>WebFund 2015W: Assignment 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Assignment_7&amp;diff=19995"/>
		<updated>2015-03-16T14:09:32Z</updated>

		<summary type="html">&lt;p&gt;Afry: changed name of directory&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this assignment you will be modifying [http://homeostasis.scs.carleton.ca/~soma/webfund-2015w/code/ajax-notes.zip the AJAX notes demo] from [[WebFund 2015W: Tutorial 7|Tutorial 7]].  There are 10 points: 7 in the 3 tasks listed below and 3 for code style.  This assignment is due by 10 AM on &amp;lt;del&amp;gt;Monday, March 16, 2015&amp;lt;/del&amp;gt; &#039;&#039;&#039;Wednesday, March 18, 2015&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Please submit your answers as a zip file called &amp;quot;&amp;lt;username&amp;gt;-comp2406-assign7.zip&amp;quot;, where username is your MyCarletonOne username.  This zip file should uncompress to a directory called &amp;quot;&amp;lt;username&amp;gt;-comp2406-assign7&amp;quot; and inside this directory should be two things: a directory &amp;quot;ajax-notes&amp;quot; that contains the application and a text file &amp;quot;comments.txt&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;comments.txt&amp;quot; should:&lt;br /&gt;
* list any references you used to complete the assignment (documentation web sites, for example),&lt;br /&gt;
* list your collaborators, and&lt;br /&gt;
* optionally, should discuss any issues or concerns you had when completing this assignment.&lt;br /&gt;
Remember that while you are allowed to collaborate with others, plagiarism is not allowed.  In other words you &#039;&#039;&#039;should not&#039;&#039;&#039; be copying any code or data directly from anywhere, and any assistance or inspiration should be credited.  Any significant code similarity (beyond the code already given to you) will be considered plagiarism and will be reported to the Dean.&lt;br /&gt;
&lt;br /&gt;
==Tasks==&lt;br /&gt;
# [2] Implement a [delete] button on the edit note view (with an id of &amp;quot;delete&amp;quot;).  When pressed it should put up a [https://developer.mozilla.org/en-US/docs/Web/API/window/confirm confirm modal dialog box] that says &#039;Delete note &amp;quot;The note title&amp;quot;?&#039;  (Replace &amp;quot;The note title&amp;quot; with the actual title of the note to delete.)  If the user says okay then it should do a POST to &amp;quot;/deleteNote&amp;quot; where the form body contains an &amp;quot;id&amp;quot; value that has the _id of the note to be deleted.  This POST should return &amp;quot;note deleted&amp;quot; upon success or &amp;quot;ERROR: note not deleted&amp;quot; upon failure.  When the POST returns the page should be refreshed with the current list of notes.&lt;br /&gt;
# [2] Make it so the contents of notes escape embedded HTML tags.  However, allow links to be embedded in notes using the syntax of &amp;quot;[&amp;lt;link&amp;gt; &amp;lt;label&amp;gt;]&amp;quot; where link is a URL for an &amp;lt;a&amp;gt; tag and the label (the rest of the text in the square brackets) is the label for the URL.  If there is no label then the link itself should be the &amp;lt;a&amp;gt; tag&#039;s label.&lt;br /&gt;
# [3] Add a Change Username button, beside the refresh button, that allows you to change the username for a user.&lt;br /&gt;
#* This button should have an &amp;lt;tt&amp;gt;id=changeusername&amp;lt;/tt&amp;gt; assigned.  It should cause the notesArea to be replaced with an interface for changing the username.&lt;br /&gt;
#* The text field for the changed username should have an &amp;lt;tt&amp;gt;id=&amp;quot;username&amp;quot;&amp;lt;/tt&amp;gt;.  Below this should be two buttons, &amp;quot;Change Username&amp;quot; (with id of &amp;quot;doChangeUsername&amp;quot;) and &amp;quot;Cancel&amp;quot; (with id of &amp;quot;cancelUsernameChange&amp;quot;).&lt;br /&gt;
#* The Cancel button should cause the notes list to be redrawn.  The Change Username button should do a POST to /changeusername to actually do the username change (with a body value of newUsername for the new username).  When this post returns, if it was successful (returning a value &amp;quot;username changed&amp;quot;) it should change the username on the page.  It should NOT change the username if the name change failed (returning a value &amp;quot;ERROR: username not changed&amp;quot;).  When the post returns the notes list should be redrawn.&lt;br /&gt;
#* The new username can be any username that is not currently being used by any stored notes.  Note that you&#039;ll need to change the username in the session and update the owner field in stored notes as appropriate.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Tutorial_1&amp;diff=19650</id>
		<title>WebFund 2015W: Tutorial 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Tutorial_1&amp;diff=19650"/>
		<updated>2015-01-13T17:59:22Z</updated>

		<summary type="html">&lt;p&gt;Afry: /* ASIDE: if you are unsure of your account information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For this class you will be using a [http://www.lubuntu.net/ Lubuntu] virtual machine appliance.  We will be using [https://www.virtualbox.org/ VirtualBox] as our preferred virtualization platform; however, VMware Workstation/Fusion and other virtualization platforms should be able to run the appliance as well.  In this first tutorial you will become familiar with the [http://nodejs.org/ node.js]-based development environment provided by this appliance.&lt;br /&gt;
&lt;br /&gt;
To get credit for this lab, show a TA or the instructor that you have gotten the class VM running, made simple changes to your first web app, and started lessons on CodeAcademy (or convince them you don&#039;t need to).&lt;br /&gt;
&lt;br /&gt;
If you finish early (which you are likely to do), try exploring node and the Lubuntu environment.  You will be using them a lot this semester!&lt;br /&gt;
&lt;br /&gt;
==Running the VM==&lt;br /&gt;
&lt;br /&gt;
In the SCS labs you should be able to run the VM by starting VirtualBox (listed in the Applications menu) and selecting the COMP 2406 virtual machine.  After the VM has fully booted up you should be logged in automatically as the user &amp;quot;student&amp;quot;.  If the screen locks or you need administrative access (via sudo), however, you&#039;ll need the password for the student account.  This password is &amp;quot;tneduts!&amp;quot; (&amp;quot;student&amp;quot; backwards followed by an !).  There is also an admin account in case your student account gets corrupted for any reason.  The password for it is &amp;quot;nimda!&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
We highly recommend running your VM in full-screen mode.  (Don&#039;t maximize the window; instead select full screen from the view menu.)  Do all of your work inside of the VM; it should be fast enough and you won&#039;t have any issues with sharing files or with host firewalls.&lt;br /&gt;
&lt;br /&gt;
If you want to run the appliance on your own system (running essentially any desktop operating system you want), just [http://homeostasis.scs.carleton.ca/~soma/VMs/COMP%202406%20Winter%202015.ova download the virtual appliance file] and import it.  The SHA1 hash of this file is:&lt;br /&gt;
&lt;br /&gt;
  47849f3c5a4b11e1c701bd95ba4bb8f88062d8ba  COMP 2406 Winter 2015.ova&lt;br /&gt;
&lt;br /&gt;
On Windows you can compute this hash for your downloaded file using the command [http://support.microsoft.com/kb/889768 &amp;lt;tt&amp;gt;FCIV -sha1 COMP 2406 Winter 2015.ova&amp;lt;/tt&amp;gt;].  If the hash is different from above, your download has been corrupted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If your virtualization application is not VirtualBox, you&#039;ll need to:&lt;br /&gt;
* Have the VM platform ignore any errors in the structure of the appliance when importing;&lt;br /&gt;
* Uninstall the VirtualBox guest additions by starting a terminal application and running&lt;br /&gt;
   sudo /opt/VBoxGuestAdditions-4.3.10/uninstall.sh&lt;br /&gt;
* Install your platform&#039;s own Linux guest additions, if available.&lt;br /&gt;
&lt;br /&gt;
Note that, as we will explain, you will be able to easily save the work you do from any VM to your SCS account and restore it to any other copy of the class VM.  Thus feel free to play around with VMs; if you break anything, you can always revert.  Remember though that in the labs you &#039;&#039;&#039;must&#039;&#039;&#039; save and restore your work, as all of your changes to the VM will be lost when you log out!&lt;br /&gt;
&lt;br /&gt;
While you may update the software in the VM, those updates will be lost when you next login to the lab machines; thus, you probably only want to update a VM installed on your own system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hello, World!==&lt;br /&gt;
&lt;br /&gt;
To create your first node application, start [http://www.geany.org/ geany], [http://brackets.io/ brackets], [http://www.vim.org/ vim], or [http://www.gnu.org/software/emacs/ emacs] code editors by clicking on their quick launch icons at the bottom left of the screen (beside the LXDE start menu button).&lt;br /&gt;
&lt;br /&gt;
(If you are a fan of vi but want to try emacs, you should type Alt-X [http://www.gnu.org/software/emacs/manual/html_mono/viper.html viper-mode].  You&#039;re welcome.)&lt;br /&gt;
&lt;br /&gt;
In your editor of choice, create a file &amp;lt;tt&amp;gt;hello.js&amp;lt;/tt&amp;gt; in your Documents folder with the following contents:&lt;br /&gt;
&lt;br /&gt;
   console.log(&amp;quot;Hello, World!&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
You can now run this file by opening an LXTerminal (under Accessories) and typing:&lt;br /&gt;
&lt;br /&gt;
   cd Documents&lt;br /&gt;
   node hello.js&lt;br /&gt;
&lt;br /&gt;
And you should see &amp;lt;tt&amp;gt;Hello, World!&amp;lt;/tt&amp;gt; output to your terminal.&lt;br /&gt;
&lt;br /&gt;
You can also run node interactively by simply running &amp;lt;tt&amp;gt;node&amp;lt;/tt&amp;gt; with no arguments.  You&#039;ll then get a prompt where you can enter any code that you like and see what it does.  To exit this environment, type Control-D.&lt;br /&gt;
&lt;br /&gt;
Note that when run interactively, we say that node is running a [http://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop read-eval-print loop] (REPL).  It reads input, evaluates it, and then prints the results.  This structure is very old in computer science, going back to the first LISP interpreters from the early 1960&#039;s.&lt;br /&gt;
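As a rough illustration (and not how node actually implements its REPL), the read-eval-print cycle can be sketched in a few lines of JavaScript, with a hard-coded list of inputs standing in for lines typed at the prompt:

```javascript
// Toy sketch of a read-eval-print loop. The fixed inputs stand in for
// lines read from the terminal.
var inputs = ['1 + 2', '"Hello".length'];

// "eval" each line of input...
var results = inputs.map(function (line) {
  return eval(line); // eval is fine for a toy REPL; avoid it in real code
});

// ...and "print" each result.
results.forEach(function (r) {
  console.log(r); // prints 3, then 5
});
```

The real REPL simply performs this same loop interactively, once per line you type.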
&lt;br /&gt;
&lt;br /&gt;
==Your First Web App==&lt;br /&gt;
&lt;br /&gt;
Web applications, even simple ones, are a bit more complex than our &amp;quot;Hello, world!&amp;quot; example.  Fortunately in node we have the [http://expressjs.com/ express] web application framework to make getting up and running quite easy.&lt;br /&gt;
&lt;br /&gt;
Follow the directions for the [http://expressjs.com/starter/generator.html express application generator] in a terminal window.  In short form, you should run the following commands to make &amp;quot;myapp&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
  sudo npm install express-generator -g&lt;br /&gt;
  express myapp&lt;br /&gt;
  cd myapp&lt;br /&gt;
  npm install&lt;br /&gt;
  DEBUG=myapp ./bin/www&lt;br /&gt;
&lt;br /&gt;
To see what your app is doing, start up a web browser in your VM and visit the following URL:&lt;br /&gt;
&lt;br /&gt;
  http://localhost:3000&lt;br /&gt;
&lt;br /&gt;
You should see a message from your first web application!&lt;br /&gt;
&lt;br /&gt;
If you have any problems, particularly with network connections, you can get and run the basic app by instead doing the following:&lt;br /&gt;
&lt;br /&gt;
  wget http://homeostasis.scs.carleton.ca/~soma/webfund-2015w/code/myapp.zip&lt;br /&gt;
  unzip myapp&lt;br /&gt;
  cd myapp&lt;br /&gt;
  DEBUG=myapp ./bin/www&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Observing/Debugging the Web App==&lt;br /&gt;
&lt;br /&gt;
When you attempt to observe or debug a web application, you have to keep an eye on what happens on the server, what happens on the client (web browser), and what happens on the network in between.&lt;br /&gt;
&lt;br /&gt;
You can use the browser&#039;s built-in developer tools to look at the network activity (as seen by the browser of course) and what happens with the actual rendering of the web page.  In either Firefox or Chromium (the open source version of Chrome) you can get access to the developer tools by typing the key sequence Shift-Control-I or by selecting the developer tools from the menu.  Pay particular attention to these tabs:&lt;br /&gt;
* The Network tab tells you about what has been sent and received.&lt;br /&gt;
* The Inspector or Elements tab lets you examine the DOM (document object model), i.e., the parsed version of the web page.&lt;br /&gt;
* The Console tab gives you a JavaScript prompt in the context of the web page.&lt;br /&gt;
For example, to find out the user agent string of the current browser type &amp;quot;navigator.userAgent&amp;quot; in the Console tab.&lt;br /&gt;
&lt;br /&gt;
To find out what is happening on the server side, for now you&#039;ll have to either add console.log() or console.error() statements to print things out.  Later we will learn how to use the node debugger.&lt;br /&gt;
&lt;br /&gt;
==Simple Changes==&lt;br /&gt;
&lt;br /&gt;
Now that you have an app up and running, make the following simple changes:&lt;br /&gt;
* Change the default port to 2000 (by editing bin/www)&lt;br /&gt;
* Change the title to &amp;quot;My First Web App&amp;quot; (&amp;lt;tt&amp;gt;routes/index.js&amp;lt;/tt&amp;gt;)&lt;br /&gt;
* Prevent the default stylesheet &amp;lt;tt&amp;gt;style.css&amp;lt;/tt&amp;gt; from being loaded (&amp;lt;tt&amp;gt;views/layout.jade&amp;lt;/tt&amp;gt;)&lt;br /&gt;
* Add a paragraph to the initial page saying &amp;quot;This page is pretty boring.&amp;quot;&lt;br /&gt;
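For the port change, the relevant logic in the generated bin/www looks roughly like the following sketch (the function name and structure are assumed from the express generator's output; check your own bin/www for the exact code):

```javascript
// Hedged sketch of the port selection in a generated bin/www file.
function normalizePort(val) {
  var port = parseInt(val, 10);
  if (isNaN(port)) {
    return val; // named pipe
  }
  if (port >= 0) {
    return port; // port number
  }
  return false;
}

// Changing the default port means changing the fallback string:
var port = normalizePort('2000'); // the generator's default is '3000'
console.log(port); // prints 2000
```

In the real file the argument is typically process.env.PORT with the string as a fallback, so the environment can still override your new default.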
&lt;br /&gt;
Note that the Jade files (ending in .jade) are used to generate HTML files.  Compare the source of the page as seen by the web browser to the original .jade files to see how they connect.  Also, you can look at the [http://jade-lang.com/reference/ Jade documentation].&lt;br /&gt;
&lt;br /&gt;
We don&#039;t expect you to understand everything that you are doing in this tutorial; these exercises are designed more to get your feet wet so you can start asking the right kinds of questions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving your work==&lt;br /&gt;
&lt;br /&gt;
You can save your work to your SCS account by running the following (where it says &amp;lt;SCS username&amp;gt;, substitute your username without the angle brackets):&lt;br /&gt;
&lt;br /&gt;
  save2406 &amp;lt;SCS username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will rsync /home/student to the COMP2406 directory in your SCS account by connecting to access.scs.carleton.ca.&lt;br /&gt;
&lt;br /&gt;
When you wish to restore your student account, run&lt;br /&gt;
&lt;br /&gt;
  restore2406 &amp;lt;SCS username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that both of these commands are destructive - they will wipe out all the files in the COMP2406 folder on SCS or /home/student in your VM.  If you want to see what the differences are between the two versions, run&lt;br /&gt;
&lt;br /&gt;
  compare2406 &amp;lt;SCS username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== ASIDE: If you are unsure of your account information (Please read this) ===&lt;br /&gt;
&lt;br /&gt;
Your SCS account is your access.scs.carleton.ca account (i.e., your Unix/Linux account).  If you do not have one or do not remember your password, go to the main SCS page [http://www.scs.carleton.ca here], click on Tech Support (top right corner), then on Accounts, then Linux accounts, and follow the link [http://www.scs.carleton.ca/webacct/ http://www.scs.carleton.ca/webacct/].  Read the policy, accept it, and proceed to retrieve your account information or set up your account.  You will then use this password for all future save and restore operations!&lt;br /&gt;
&lt;br /&gt;
==CodeAcademy==&lt;br /&gt;
&lt;br /&gt;
Now that you&#039;ve got your virtual machine running, it is time to start learning about web technologies.  If you haven&#039;t already, you should either go through or make sure you know the material in all of the following CodeAcademy modules:&lt;br /&gt;
* [http://www.codecademy.com/tracks/web Web Fundamentals]&lt;br /&gt;
* [http://www.codecademy.com/tracks/javascript/ Javascript]&lt;br /&gt;
* [http://www.codecademy.com/tracks/jquery jQuery]&lt;br /&gt;
* [http://www.codecademy.com/tracks/projects Web Projects] (just the basic projects)&lt;br /&gt;
&lt;br /&gt;
Feel free to skip around; these should be very simple for you, at least at the beginning.  Try doing the last parts of each lesson to see if you need to bother going through it.  You&#039;ll be expected to be familiar with most of this material starting in next week&#039;s tutorial (and Assignment 2).&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Tutorial_1&amp;diff=19649</id>
		<title>WebFund 2015W: Tutorial 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebFund_2015W:_Tutorial_1&amp;diff=19649"/>
		<updated>2015-01-13T17:58:19Z</updated>

		<summary type="html">&lt;p&gt;Afry: /* Saving your work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For this class you will be using a [http://www.lubuntu.net/ Lubuntu] virtual machine appliance.  We will be using [https://www.virtualbox.org/ VirtualBox] as our preferred virtualization platform; however, VMware Workstation/Fusion and other virtualization platforms should be able to run the appliance as well.  In this first tutorial you will become familiar with the [http://nodejs.org/ node.js]-based development environment provided by this appliance.&lt;br /&gt;
&lt;br /&gt;
To get credit for this lab, show a TA or the instructor that you have gotten the class VM running, made simple changes to your first web app, and started lessons on CodeAcademy (or convince them you don&#039;t need to).&lt;br /&gt;
&lt;br /&gt;
If you finish early (which you are likely to do), try exploring node and the Lubuntu environment.  You will be using them a lot this semester!&lt;br /&gt;
&lt;br /&gt;
==Running the VM==&lt;br /&gt;
&lt;br /&gt;
In the SCS labs you should be able to run the VM by starting VirtualBox (listed in the Applications menu) and selecting the COMP 2406 virtual machine.  After the VM has fully booted up you should be logged in automatically as the user &amp;quot;student&amp;quot;.  If the screen locks or you need administrative access (via sudo), however, you&#039;ll need the password for the student account.  This password is &amp;quot;tneduts!&amp;quot; (&amp;quot;student&amp;quot; backwards followed by an !).  There is also an admin account in case your student account gets corrupted for any reason.  The password for it is &amp;quot;nimda!&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
We highly recommend running your VM in full-screen mode.  (Don&#039;t maximize the window; instead select full screen from the view menu.)  Do all of your work inside of the VM; it should be fast enough and you won&#039;t have any issues with sharing files or with host firewalls.&lt;br /&gt;
&lt;br /&gt;
If you want to run the appliance on your own system (running essentially any desktop operating system you want), just [http://homeostasis.scs.carleton.ca/~soma/VMs/COMP%202406%20Winter%202015.ova download the virtual appliance file] and import it.  The SHA1 hash of this file is:&lt;br /&gt;
&lt;br /&gt;
  47849f3c5a4b11e1c701bd95ba4bb8f88062d8ba  COMP 2406 Winter 2015.ova&lt;br /&gt;
&lt;br /&gt;
On Windows you can compute this hash for your downloaded file using the command [http://support.microsoft.com/kb/889768 &amp;lt;tt&amp;gt;FCIV -sha1 COMP 2406 Winter 2015.ova&amp;lt;/tt&amp;gt;].  If the hash is different from above, your download has been corrupted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If your virtualization application is not VirtualBox, you&#039;ll need to:&lt;br /&gt;
* Have the VM platform ignore any errors in the structure of the appliance when importing;&lt;br /&gt;
* Uninstall the VirtualBox guest additions by starting a terminal application and running&lt;br /&gt;
   sudo /opt/VBoxGuestAdditions-4.3.10/uninstall.sh&lt;br /&gt;
* Install your platform&#039;s own Linux guest additions, if available.&lt;br /&gt;
&lt;br /&gt;
Note that, as we will explain, you will be able to easily save the work you do from any VM to your SCS account and restore it to any other copy of the class VM.  Thus feel free to play around with VMs; if you break anything, you can always revert.  Remember though that in the labs you &#039;&#039;&#039;must&#039;&#039;&#039; save and restore your work, as all of your changes to the VM will be lost when you log out!&lt;br /&gt;
&lt;br /&gt;
While you may update the software in the VM, those updates will be lost when you next login to the lab machines; thus, you probably only want to update a VM installed on your own system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hello, World!==&lt;br /&gt;
&lt;br /&gt;
To create your first node application, start [http://www.geany.org/ geany], [http://brackets.io/ brackets], [http://www.vim.org/ vim], or [http://www.gnu.org/software/emacs/ emacs] code editors by clicking on their quick launch icons at the bottom left of the screen (beside the LXDE start menu button).&lt;br /&gt;
&lt;br /&gt;
(If you are a fan of vi but want to try emacs, you should type Alt-X [http://www.gnu.org/software/emacs/manual/html_mono/viper.html viper-mode].  You&#039;re welcome.)&lt;br /&gt;
&lt;br /&gt;
In your editor of choice, create a file &amp;lt;tt&amp;gt;hello.js&amp;lt;/tt&amp;gt; in your Documents folder with the following contents:&lt;br /&gt;
&lt;br /&gt;
   console.log(&amp;quot;Hello, World!&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
You can now run this file by opening an LXTerminal (under Accessories) and typing:&lt;br /&gt;
&lt;br /&gt;
   cd Documents&lt;br /&gt;
   node hello.js&lt;br /&gt;
&lt;br /&gt;
And you should see &amp;lt;tt&amp;gt;Hello, World!&amp;lt;/tt&amp;gt; output to your terminal.&lt;br /&gt;
&lt;br /&gt;
You can also run node interactively by simply running &amp;lt;tt&amp;gt;node&amp;lt;/tt&amp;gt; with no arguments.  You&#039;ll then get a prompt where you can enter any code that you like and see what it does.  To exit this environment, type Control-D.&lt;br /&gt;
&lt;br /&gt;
Note that when run interactively, we say that node is running a [http://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop read-eval-print loop] (REPL).  It reads input, evaluates it, and then prints the results.  This structure is very old in computer science, going back to the first LISP interpreters from the early 1960&#039;s.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Your First Web App==&lt;br /&gt;
&lt;br /&gt;
Web applications, even simple ones, are a bit more complex than our &amp;quot;Hello, world!&amp;quot; example.  Fortunately in node we have the [http://expressjs.com/ express] web application framework to make getting up and running quite easy.&lt;br /&gt;
&lt;br /&gt;
Follow the directions for the [http://expressjs.com/starter/generator.html express application generator] in a terminal window.  In short form, you should run the following commands to make &amp;quot;myapp&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
  sudo npm install express-generator -g&lt;br /&gt;
  express myapp&lt;br /&gt;
  cd myapp&lt;br /&gt;
  npm install&lt;br /&gt;
  DEBUG=myapp ./bin/www&lt;br /&gt;
&lt;br /&gt;
To see what your app is doing, start up a web browser in your VM and visit the following URL:&lt;br /&gt;
&lt;br /&gt;
  http://localhost:3000&lt;br /&gt;
&lt;br /&gt;
You should see a message from your first web application!&lt;br /&gt;
&lt;br /&gt;
If you have any problems, particularly with network connections, you can get and run the basic app by instead doing the following:&lt;br /&gt;
&lt;br /&gt;
  wget http://homeostasis.scs.carleton.ca/~soma/webfund-2015w/code/myapp.zip&lt;br /&gt;
  unzip myapp&lt;br /&gt;
  cd myapp&lt;br /&gt;
  DEBUG=myapp ./bin/www&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Observing/Debugging the Web App==&lt;br /&gt;
&lt;br /&gt;
When you attempt to observe or debug a web application, you have to keep an eye on what happens on the server, what happens on the client (web browser), and what happens on the network in between.&lt;br /&gt;
&lt;br /&gt;
You can use the browser&#039;s built-in developer tools to look at the network activity (as seen by the browser of course) and what happens with the actual rendering of the web page.  In either Firefox or Chromium (the open source version of Chrome) you can get access to the developer tools by typing the key sequence Shift-Control-I or by selecting the developer tools from the menu.  Pay particular attention to these tabs:&lt;br /&gt;
* The Network tab tells you about what has been sent and received.&lt;br /&gt;
* The Inspector or Elements tab lets you examine the DOM (document object model), i.e., the parsed version of the web page.&lt;br /&gt;
* The Console tab gives you a JavaScript prompt in the context of the web page.&lt;br /&gt;
For example, to find out the user agent string of the current browser type &amp;quot;navigator.userAgent&amp;quot; in the Console tab.&lt;br /&gt;
&lt;br /&gt;
To find out what is happening on the server side, for now you&#039;ll have to either add console.log() or console.error() statements to print things out.  Later we will learn how to use the node debugger.&lt;br /&gt;
&lt;br /&gt;
==Simple Changes==&lt;br /&gt;
&lt;br /&gt;
Now that you have an app up and running, make the following simple changes:&lt;br /&gt;
* Change the default port to 2000 (by editing bin/www)&lt;br /&gt;
* Change the title to &amp;quot;My First Web App&amp;quot; (&amp;lt;tt&amp;gt;routes/index.js&amp;lt;/tt&amp;gt;)&lt;br /&gt;
* Prevent the default stylesheet &amp;lt;tt&amp;gt;style.css&amp;lt;/tt&amp;gt; from being loaded (&amp;lt;tt&amp;gt;views/layout.jade&amp;lt;/tt&amp;gt;)&lt;br /&gt;
* Add a paragraph to the initial page saying &amp;quot;This page is pretty boring.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Note that the Jade files (ending in .jade) are used to generate HTML files.  Compare the source of the page as seen by the web browser to the original .jade files to see how they connect.  Also, you can look at the [http://jade-lang.com/reference/ Jade documentation].&lt;br /&gt;
&lt;br /&gt;
We don&#039;t expect you to understand everything that you are doing in this tutorial; these exercises are designed more to get your feet wet so you can start asking the right kinds of questions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Saving your work==&lt;br /&gt;
&lt;br /&gt;
You can save your work to your SCS account by running the following (where it says &amp;lt;SCS username&amp;gt;, substitute your username without the angle brackets):&lt;br /&gt;
&lt;br /&gt;
  save2406 &amp;lt;SCS username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will rsync /home/student to the COMP2406 directory in your SCS account by connecting to access.scs.carleton.ca.&lt;br /&gt;
&lt;br /&gt;
When you wish to restore your student account, run&lt;br /&gt;
&lt;br /&gt;
  restore2406 &amp;lt;SCS username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that both of these commands are destructive - they will wipe out all the files in the COMP2406 folder on SCS or /home/student in your VM.  If you want to see what the differences are between the two versions, run&lt;br /&gt;
&lt;br /&gt;
  compare2406 &amp;lt;SCS username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== ASIDE: if you are unsure of your account information ===&lt;br /&gt;
&lt;br /&gt;
Your SCS account is your access.scs.carleton.ca account (i.e., your Unix/Linux account).  If you do not have one or do not remember your password, go to the main SCS page [http://www.scs.carleton.ca here], click on Tech Support (top right corner), then on Accounts, then Linux accounts, and follow the link [http://www.scs.carleton.ca/webacct/ http://www.scs.carleton.ca/webacct/].  Read the policy, accept it, and proceed to retrieve your account information or set up your account.  You will then use this password for all future save and restore operations!&lt;br /&gt;
&lt;br /&gt;
==CodeAcademy==&lt;br /&gt;
&lt;br /&gt;
Now that you&#039;ve got your virtual machine running, it is time to start learning about web technologies.  If you haven&#039;t already, you should either go through or make sure you know the material in all of the following CodeAcademy modules:&lt;br /&gt;
* [http://www.codecademy.com/tracks/web Web Fundamentals]&lt;br /&gt;
* [http://www.codecademy.com/tracks/javascript/ Javascript]&lt;br /&gt;
* [http://www.codecademy.com/tracks/jquery jQuery]&lt;br /&gt;
* [http://www.codecademy.com/tracks/projects Web Projects] (just the basic projects)&lt;br /&gt;
&lt;br /&gt;
Feel free to skip around; these should be very simple for you, at least at the beginning.  Try the last parts of each lesson first to see whether you need to go through the rest.  You&#039;ll be expected to be familiar with most of this material starting in next week&#039;s tutorial (and Assignment 2).&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_14&amp;diff=19434</id>
		<title>Operating Systems 2014F Lecture 14</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_14&amp;diff=19434"/>
		<updated>2014-11-05T14:51:14Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;How does an operating system know it&#039;s accessed memory it doesn&#039;t have access to? A lot of you said segments. &lt;br /&gt;
&lt;br /&gt;
Filesystems - normally when we talk about operating system mechanisms, we are talking about access to hardware systems; here, persistent storage.&lt;br /&gt;
&lt;br /&gt;
A filesystem is an abstraction for persistent storage - storage that maintains its state when it loses power.&lt;br /&gt;
There are a couple of challenges with persistent storage. What&#039;s weird about storing things in persistent storage? It&#039;s slow, and we need durability and persistence.&lt;br /&gt;
&lt;br /&gt;
Going to make errors - we should be able to recover from them. Maybe not fix everything, but preserve most of the data. This is a huge burden on filesystems; filesystem development tends to be very slow in practice. Whatever code you have doing this has to do it right. Older filesystems tend to be more trustworthy; when hardware changes, bugs you didn&#039;t know were in the filesystem may come up.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
what do we have today? Indexed filesystems.&lt;br /&gt;
&lt;br /&gt;
There is typically a minimum storage allocation given to every file. That&#039;s the minimum size of a file: it takes up 4K / 8K. This is not strictly true for all filesystems - there was a filesystem that allowed arbitrarily sized files (ReiserFS).&lt;br /&gt;
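A sketch of that minimum allocation: on-disk usage is the file size rounded up to a whole number of blocks (block size assumed 4 KiB here, and we assume every file gets at least one block, per the note above):&lt;br /&gt;

```python
# On-disk usage of a file: whole blocks only (illustrative sketch).
BLOCK = 4096  # assumed block size

def on_disk(size_bytes):
    blocks = -(-size_bytes // BLOCK)   # ceiling division
    # Assume even an empty file gets at least one block, as the note says.
    return max(blocks, 1) * BLOCK

print(on_disk(1))      # 4096
print(on_disk(5000))   # 8192
```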
&lt;br /&gt;
Unifying a key-value store for smaller and larger filesystems wasn&#039;t considered a priority. &lt;br /&gt;
&lt;br /&gt;
Modern filesystems focus on other things rather than trying to optimize the storage of small files. It&#039;s not so much file size that is the issue.&lt;br /&gt;
&lt;br /&gt;
Floppy disks vs. hard disks&lt;br /&gt;
&lt;br /&gt;
What is fast and what is slow?&lt;br /&gt;
&lt;br /&gt;
Fast: reading what is under the drive head at any given time. As long as you keep the head there, you can read the entire concentric circle really fast. That&#039;s the fastest operation you are going to get.&lt;br /&gt;
&lt;br /&gt;
What&#039;s slow? Moving the head from one part of the disk to another - that&#039;s a slow operation.&lt;br /&gt;
&lt;br /&gt;
Intuitively, why is it that slow? You have to move it with extreme precision.&lt;br /&gt;
&lt;br /&gt;
Moving the head is hard - seek time: the time it takes to move the drive head from one area of the disk to another.&lt;br /&gt;
&lt;br /&gt;
The coordinate system goes by Cylinder/Head/Sector - the geometry of the disk. If there is data I want to access in parallel, I can optimize by putting the data on different platters.&lt;br /&gt;
&lt;br /&gt;
An address is given by which head, which cylinder, and which sector, where the sector is a count around the track.&lt;br /&gt;
&lt;br /&gt;
The classic IBM PC BIOS expects cylinders, heads and sectors. It&#039;s a lie; your systems get rid of that completely. Now they use LBA - logical block addressing. What does that mean? You give a count up to the block. When I talk about hard disk blocks, how does that compare to RAM? A block is the smallest addressable unit of storage. What is the smallest addressable unit in RAM? A byte.&lt;br /&gt;
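The CHS-to-LBA mapping can be sketched as follows (the geometry numbers here are made up for illustration; real drives report their own):&lt;br /&gt;

```python
# Converting classic Cylinder/Head/Sector coordinates to a linear block
# address. Geometry values below are illustrative, not from a real drive.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(c, h, s):
    # Sectors are traditionally numbered from 1, hence the s - 1.
    return (c * HEADS_PER_CYLINDER + h) * SECTORS_PER_TRACK + (s - 1)

print(chs_to_lba(0, 0, 1))  # first block: 0
```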
&lt;br /&gt;
Older blocks could be 512 bytes; typical block sizes are 4K / 8K. When you do a transfer, you read bytes in chunks - by blocks. On Windows, a common key-value store is the registry - a hierarchical key-value store for small key-values. It&#039;s like a filesystem, but every file is really small. It&#039;s stored in a file.&lt;br /&gt;
&lt;br /&gt;
Hard disks aren&#039;t perfect - they have bad blocks. They can&#039;t actually store raw 1s and 0s: in order to encode 4 bits, you have to write 7 bits. There are weird issues in the physics of storing the data. The signals they are trying to read off the hard disk are kind of messy.&lt;br /&gt;
&lt;br /&gt;
Error correcting Codes&lt;br /&gt;
&lt;br /&gt;
How hard disks handle this internally is proprietary information belonging to the hardware manufacturers.&lt;br /&gt;
&lt;br /&gt;
keys (filenames) -&amp;gt; mapped to values&lt;br /&gt;
&lt;br /&gt;
The keys really live in directories - directory data structures that store the names of the files.&lt;br /&gt;
&lt;br /&gt;
Random access for every block is horrible on a hard drive but works well in RAM. You don&#039;t want to associate a list of blocks with every directory entry - that&#039;s a bad strategy; I have to do something better. What if I did something kind of like a segment - a range of blocks? Modern systems normally keep a list of extents instead of a list of blocks: a list of segments. The larger the file, the more blocks it&#039;s stored in; a file is divided into one or more extents. This is exactly what I do not want to do in RAM. One simple way to make extents possible? Have free space - extra space on your hard disk. A lot of filesystems will say they want to reserve a certain amount of space.&lt;br /&gt;
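The extents idea above can be sketched as collapsing runs of consecutive block numbers into (start, length) pairs:&lt;br /&gt;

```python
# A file's blocks as a list of extents (start_block, length) instead of
# enumerating every block. Illustrative sketch only.
def blocks_to_extents(blocks):
    """Collapse a sorted list of block numbers into (start, length) extents."""
    extents = []
    for b in blocks:
        if extents and extents[-1][0] + extents[-1][1] == b:
            start, length = extents[-1]
            extents[-1] = (start, length + 1)   # extend the current run
        else:
            extents.append((b, 1))              # start a new extent
    return extents

print(blocks_to_extents([100, 101, 102, 500, 501]))  # [(100, 3), (500, 2)]
```

The fewer the extents, the fewer seeks to read the whole file - which is why the filesystem reserves free space to keep runs contiguous.&lt;br /&gt;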
&lt;br /&gt;
It wants that extra space so that it can make sure it gets those long runs of contiguous blocks. If files all become fragmented, what happens to filesystem performance? It goes down. The filesystem is trying to resist fragmentation.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_11&amp;diff=19406</id>
		<title>Operating Systems 2014F Lecture 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_11&amp;diff=19406"/>
		<updated>2014-10-10T13:57:50Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Dining Philosophers problem&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When can you have deadlock?&lt;br /&gt;
&lt;br /&gt;
4 conditions must apply&lt;br /&gt;
&lt;br /&gt;
- mutual exclusion&lt;br /&gt;
&lt;br /&gt;
- hold and wait - you grab a lock and wait for the next one; you can spin or go to sleep or something. You don&#039;t just do things like try the lock and, only if you are successful, continue with the computation.&lt;br /&gt;
&lt;br /&gt;
- no pre-emption (pre-emption is taking the resource by force) - you can only have deadlock when threads are polite.&lt;br /&gt;
&lt;br /&gt;
- circular wait - that&#039;s why the dining philosophers problem has a circular table: you have to have a set of threads that are waiting on one another - that&#039;s what gets it into the problem. &lt;br /&gt;
&lt;br /&gt;
If you break any of these, you can&#039;t have deadlock.&lt;br /&gt;
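A sketch of breaking one condition - circular wait - by always acquiring locks in a fixed global order, applied to the dining philosophers:&lt;br /&gt;

```python
# Dining philosophers without deadlock: break circular wait by always
# grabbing the lower-numbered fork first (fixed global lock order).
import threading

forks = [threading.Lock() for _ in range(5)]

def philosopher(i):
    a, b = i, (i + 1) % 5
    first, second = min(a, b), max(a, b)   # fixed global order
    with forks[first]:
        with forks[second]:
            return "philosopher %d ate" % i

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("no deadlock")
```

With naive left-then-right acquisition, all five philosophers can each hold one fork and wait forever; the ordering makes the wait-for graph acyclic.&lt;br /&gt;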
&lt;br /&gt;
When people talk about deadlock, they talk about strategies for avoiding it (for removing the problem) in terms of these strategies:&lt;br /&gt;
&lt;br /&gt;
 1 prevention - construct your system so that deadlock can never happen (make it impossible). Design your system so that one or more of these conditions goes away. &lt;br /&gt;
&lt;br /&gt;
Let&#039;s say one thread needs three locks to continue - whenever it goes to sleep, I&#039;ll take its chopstick and give it back before it wakes up, and it&#039;ll never know the difference.&lt;br /&gt;
&lt;br /&gt;
 2 avoidance - whereas prevention means you are making it impossible for deadlock to happen: &lt;br /&gt;
&lt;br /&gt;
All four conditions are there in principle, but you can watch the unfolding of the computation, notice when you are getting into a situation that can lead to deadlock, and avoid it - allocating resources such that you know deadlock is never going to happen. It&#039;s not prediction, where you lay out a schedule for how everything operates. For example, with car accidents: complete prevention is not getting into the car. Avoidance is seeing something coming and steering around it, or following strategies like staying within the lanes and not going off the road.&lt;br /&gt;
&lt;br /&gt;
 3 detect and recover - you had an accident - sorry, call the police, call the body shop, fix it up.&lt;br /&gt;
&lt;br /&gt;
In practice we mostly do detect and recover; you don&#039;t do any of them perfectly. Where do watchdog timers come in? A watchdog is something that watches the system and detects failure. An analogy: a guard checking the perimeter has to check in periodically to say everything is fine. If someone were to attack the base, what would they do? Take out the guard - then the signal wouldn&#039;t come in, and you take steps to deal with it. &lt;br /&gt;
&lt;br /&gt;
A watchdog timer is a separate processor that periodically sends messages - normal interrupts - to the system. If the OS is working properly, it keeps responding to the interrupts; but if the OS doesn&#039;t respond to the watchdog timer&#039;s request, the watchdog goes uh-oh and restarts the system. A spontaneous reboot is performed to ensure the system keeps running, the assumption being that when you reboot, you come back to a working state.&lt;br /&gt;
&lt;br /&gt;
Two non-deadlock concurrency bugs:&lt;br /&gt;
- atomicity violations - you were supposed to lock it and you didn&#039;t, you were supposed to grab a lock and you didn&#039;t.&lt;br /&gt;
- order violation - you attempt to use something that hasn&#039;t been initialized, use before initialize&lt;br /&gt;
&lt;br /&gt;
TOCTTOU&lt;br /&gt;
&lt;br /&gt;
Time Of Check To Time Of Use&lt;br /&gt;
&lt;br /&gt;
race conditions - TOCTTOU is a particular class of them. In terms of memory accesses to a variable: we check the value of the variable, and then we try to make a change to it.&lt;br /&gt;
&lt;br /&gt;
temporary files - you have a program running, and in the middle of running it&#039;s potentially useful to generate temporary files (dump data in the middle of running). Where do they often go? Into a shared directory (/tmp), or among your own files.&lt;br /&gt;
&lt;br /&gt;
When you run programs that are somehow privileged (setuid / setgid programs) - normally when you run a program on a unix-like system, it runs as you; run ls, and ls is running as you. That means it can access any files that you own; it has the privileges that you have. But sometimes you need to run programs that need more access. Classic situations include lpr or passwd.&lt;br /&gt;
&lt;br /&gt;
The passwd program allows you to change your password - a secure hash of your password is stored in a file: /etc/passwd or /etc/shadow.&lt;br /&gt;
&lt;br /&gt;
These files - do you want anyone else to be able to change them? No. But sometimes regular users want to change their password, so I have this file that I need to keep protected, and sometimes I have to allow access to it. This is not an OO system; these are files. So how do I make sure that only certain code can modify that file? You have some programs that, when they run, don&#039;t run with the privileges of the person who ran them, but with the privileges of the person who owns them. We want to run this password program with extra privileges - as root - to change those files. How do I denote this? Alongside the protection bits there are the setuid and setgid bits. If these bits are set, the program runs as the user or group that owns the file. The passwd program has the setuid bit set and is owned by root, so when you run it, it runs as root. You hope passwd doesn&#039;t have any bugs, otherwise it could corrupt your passwd file.&lt;br /&gt;
&lt;br /&gt;
Why am I talking about this now? Because these sorts of programs - setuid programs - are particularly vulnerable to TOCTTOU vulnerabilities. Such a program will want to access a file, and if you are not careful, it will access the file with the wrong privileges.&lt;br /&gt;
&lt;br /&gt;
With what privileges does /bin/passwd run? With root privileges - but it has a command line option that lets it modify any file. So if you aren&#039;t careful, you could use this to modify arbitrary files on the system. We should place some restrictions - what do you place on it? You have it check the owner and group of the file against the owner and group of the person who invoked the program; it will only let me modify files that are owned by that user. Standard check - but how does it know a file is owned by that user? It has to check (a system call to ask for the inode). It has to query the filesystem and ask: what&#039;s that file, who is it owned by? What if I change the file after it&#039;s done the check but before it&#039;s modified the file?&lt;br /&gt;
&lt;br /&gt;
Symbolic link - you set up a file it treats as the password file, then quickly replace that file with a symbolic link pointing at the real password file. If I win the race - in between the time it checks and the time it modifies the file - I can get in there and do my damage. This is a particular problem at the filesystem level: all the stuff we talked about with locks can&#039;t be applied to files in the same way. It&#039;s harder to do atomic operations on files. &lt;br /&gt;
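The check-then-use window described above can be sketched in Python (an illustrative sketch, not the actual passwd code; the function name is made up):&lt;br /&gt;

```python
# The vulnerable check-then-use pattern: the file can be swapped for a
# symlink between the stat() check and the open(). Illustrative only.
import os, tempfile

def vulnerable_overwrite(path, data, invoking_uid):
    st = os.stat(path)                  # time of check
    if st.st_uid != invoking_uid:       # only touch the caller's own files
        raise PermissionError(path)
    # ...race window: an attacker who wins the race replaces path with a
    # symlink to a file the privileged program can write (e.g. /etc/shadow)...
    with open(path, "w") as f:          # time of use
        f.write(data)

fd, p = tempfile.mkstemp()
os.close(fd)
vulnerable_overwrite(p, "hello", os.getuid())
print(open(p).read())   # hello
os.unlink(p)
```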
&lt;br /&gt;
If you are programming a system and you are messing with temp files, make sure you use the system&#039;s mechanisms for doing secure temporary file allocation. One of the things these do is create a very-hard-to-guess name for your temporary files, so it&#039;s hard for an attacker to mess with them. Is it perfect? No, but it protects you against these types of attacks. A lot of the built-in ways programs create temporary files claim to be secure, but they are not.&lt;br /&gt;
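One such mechanism is Python&#039;s tempfile module: mkstemp creates the file atomically with an unpredictable name, so an attacker cannot pre-plant a file or symlink at that path:&lt;br /&gt;

```python
# Secure temporary file creation with tempfile.mkstemp: the file is
# created atomically (O_EXCL) under an unpredictable name.
import os, tempfile

fd, path = tempfile.mkstemp(prefix="notes-")
os.write(fd, b"scratch data")
os.close(fd)
print(os.path.exists(path))   # True
os.unlink(path)
```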
&lt;br /&gt;
You know that a privileged program is going to mess with the file: it does a check, then it modifies the file. You set up your race so that you wait for it to do its check; once that check is done and has passed, you swap the file around. How can I win that race? I can potentially run a fork bomb to slow the system down - to win the race.&lt;br /&gt;
&lt;br /&gt;
Switch gears: start thinking about TLBs. The TLB is probably the most annoying term in operating systems - Translation Lookaside Buffer. It&#039;s used when going from virtual to physical addresses, which we do not on a per-address basis but on a per-page basis. The page table is not actually a flat table; it&#039;s a tree with a couple of levels. The important thing to realize is that it&#039;s a big, complex data structure that you can&#039;t afford to access on every memory access. It&#039;s too slow - why? To access a big data structure, you have to access multiple locations in memory: the thing you are trying to do quickly would require many operations that are much slower. The TLB is a cache of virtual-to-physical mappings - the smallest cache on the chip. The question is: why is it small? What&#039;s the operation I want to do on this cache? I want to give it a virtual address and have it give me the corresponding physical address. How can you do a really fast table lookup? &lt;br /&gt;
&lt;br /&gt;
1 keep the table sorted and binary search it - faster than scanning linearly, but still many sequential accesses &lt;br /&gt;
2 use a key-value data structure - a hash map gives constant time on average, but even a hash table is not fast enough for this&lt;br /&gt;
So instead, the fastest way to do key-value resolution is a content-addressable memory - loosely, how our brains work: you retrieve by content, not by address. Ordinary RAM is address-addressable: ask for whatever value is at location 2000 and you get that value and nothing else. A content-addressable memory is the reverse: you present a key (the virtual page number) and every entry is compared against it simultaneously, returning the matching value. A TLB is exactly this - a small set of such entries, each one a page table entry.&lt;br /&gt;
&lt;br /&gt;
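The behaviour (though not the hardware parallelism) can be sketched in software. A toy model - ToyTLB is an illustrative name, and a real TLB compares all entries at once in hardware rather than doing a dict lookup:&lt;br /&gt;

```python
from collections import OrderedDict

class ToyTLB:
    """Toy model of a TLB: a tiny fully-associative cache of
    page-number -> frame-number mappings.  A real TLB matches the key
    against every entry simultaneously (content-addressable memory);
    keeping it small is what makes that parallel compare feasible."""

    def __init__(self, capacity=64, page_size=4096):
        self.capacity = capacity
        self.page_size = page_size
        self.entries = OrderedDict()  # page -> frame, in LRU order

    def translate(self, vaddr, page_table):
        page, offset = divmod(vaddr, self.page_size)
        if page in self.entries:            # TLB hit: no table walk
            self.entries.move_to_end(page)
            frame = self.entries[page]
        else:                               # miss: walk the page table
            frame = page_table[page]
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict LRU entry
            self.entries[page] = frame
        return frame * self.page_size + offset
```

The expensive page-table walk only happens on a miss; hits are answered from the small cache.&lt;br /&gt;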
[[File:TLB.png]]&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:TLB.png&amp;diff=19405</id>
		<title>File:TLB.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:TLB.png&amp;diff=19405"/>
		<updated>2014-10-10T13:57:19Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_11&amp;diff=19401</id>
		<title>Operating Systems 2014F Lecture 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_11&amp;diff=19401"/>
		<updated>2014-10-10T13:40:18Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Dining Philosophers problem&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When can you have deadlock?&lt;br /&gt;
&lt;br /&gt;
4 conditions must apply&lt;br /&gt;
&lt;br /&gt;
- mutual exclusion&lt;br /&gt;
&lt;br /&gt;
- hold and wait - a thread can grab one lock and then block waiting for the next one (spinning, or going to sleep). If instead you only ever try the lock and, on failure, release what you hold before retrying, this condition goes away.&lt;br /&gt;
&lt;br /&gt;
- no pre-emption - pre-emption means taking a resource away by force. Deadlock can only happen when everyone is polite and resources are never forcibly reclaimed.&lt;br /&gt;
&lt;br /&gt;
- circular wait - each thread is waiting on a resource held by the next, forming a cycle. That is why the dining philosophers sit at a circular table: the cycle of waiting is what gets them into trouble. &lt;br /&gt;
&lt;br /&gt;
Break any one of these four conditions and deadlock becomes impossible.&lt;br /&gt;
&lt;br /&gt;
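One concrete way to break circular wait is to impose a single global order in which locks are acquired. A minimal sketch (the two-philosopher setup here is illustrative, not from the lecture code):&lt;br /&gt;

```python
import threading

fork_a = threading.Lock()
fork_b = threading.Lock()

def philosopher(first, second, meals, idx):
    # Deadlock prevention by breaking "circular wait": every thread
    # acquires the locks in the same global order, so no cycle of
    # threads each holding one lock and waiting on the next can form.
    for _ in range(1000):
        with first:
            with second:
                meals[idx] += 1

meals = [0, 0]
# Both philosophers take fork_a before fork_b -- same global order.
t1 = threading.Thread(target=philosopher, args=(fork_a, fork_b, meals, 0))
t2 = threading.Thread(target=philosopher, args=(fork_a, fork_b, meals, 1))
t1.start(); t2.start()
t1.join(); t2.join()
```

If each philosopher instead grabbed their own left fork first, thread 1 could hold fork_a while thread 2 holds fork_b, each waiting on the other forever.&lt;br /&gt;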
When people talk about deadlock, they talk about strategies for avoiding it (for removing the problem) in terms of these strategies:&lt;br /&gt;
&lt;br /&gt;
 1 prevention - construct the system so that deadlock can never happen: design it so that one or more of the four conditions is structurally impossible. &lt;br /&gt;
&lt;br /&gt;
For example, suppose one thread needs three locks to continue - we could break &quot;no pre-emption&quot;: whenever a philosopher goes to sleep, take their chopstick and give it back before they wake up, and they will never know the difference.&lt;br /&gt;
&lt;br /&gt;
 2 avoidance - unlike prevention, all four conditions can still hold in principle. &lt;br /&gt;
&lt;br /&gt;
But you watch the computation unfold, notice when it is heading toward a situation that could lead to deadlock, and steer away - allocating resources such that you know deadlock will never actually occur. It is not full prediction, where you lay out a schedule for how everything operates in advance. By analogy with car accidents: complete prevention is not getting into the car at all; avoidance is seeing something coming and steering around it, or following strategies like staying within the lanes and not going off the road.&lt;br /&gt;
&lt;br /&gt;
 3 detect and recover - you had the accident; now call the police, call the body shop, and fix it up afterwards.&lt;br /&gt;
&lt;br /&gt;
In practice we mostly do detect and recover - none of these strategies is done perfectly. This is where watchdog timers come in: something that watches the system and detects when it has failed. An easy analogy: a guard walking the perimeter of a base has to check in periodically to report that everything is fine. If someone attacks the base, what do they do first? They take out the guard - and then the check-in signal stops coming. The absence of the signal is itself the alarm, and you take steps to deal with it. &lt;br /&gt;
&lt;br /&gt;
A watchdog timer is a separate processor that periodically sends messages - normal interrupts - to the system. If the OS is working properly, it keeps responding to those interrupts; if it doesn&#039;t respond to the watchdog timer&#039;s request, the watchdog goes uh-oh and restarts the system. This spontaneous reboot keeps the system running, the assumption being that when you reboot, you come back to a working state.&lt;br /&gt;
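A real watchdog is a separate piece of hardware, but the protocol can be sketched in software (hypothetical Python; the Watchdog class and the timings are my own):

```python
import threading
import time

class Watchdog:
    """Toy watchdog: if pet() isn't called again within `timeout`
    seconds, the recovery action (a stand-in for a reboot) fires."""
    def __init__(self, timeout, on_expire):
        self.timeout = timeout
        self.on_expire = on_expire
        self.timer = None

    def pet(self):
        # A healthy system answers the watchdog periodically,
        # resetting the countdown each time.
        if self.timer:
            self.timer.cancel()
        self.timer = threading.Timer(self.timeout, self.on_expire)
        self.timer.daemon = True
        self.timer.start()

rebooted = []
dog = Watchdog(0.2, lambda: rebooted.append(True))
dog.pet()
for _ in range(3):       # system alive: heartbeats keep arriving
    time.sleep(0.05)
    dog.pet()
assert not rebooted      # no reboot while heartbeats arrive
time.sleep(0.5)          # system hangs: heartbeats stop
```

After the hang, the countdown expires and the "reboot" action runs exactly once.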
&lt;br /&gt;
Two kinds of non-deadlock concurrency bugs:&lt;br /&gt;
- atomicity violations - you were supposed to grab a lock around a sequence of operations and you didn&#039;t.&lt;br /&gt;
- order violations - you use something before it has been initialized (use before initialize).&lt;br /&gt;
&lt;br /&gt;
TOCTTOU&lt;br /&gt;
&lt;br /&gt;
Time Of Check To Time Of Use&lt;br /&gt;
&lt;br /&gt;
race conditions - TOCTTOU bugs are a particular class of race condition. In terms of memory accesses to a variable: we check the value of the variable, and then we try to make a change based on that check - but the value can change in between.&lt;br /&gt;
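The check-then-change pattern on a shared counter, and the lock that makes it atomic, can be sketched as (illustrative Python, my own example):

```python
import threading

counter = 0
lock = threading.Lock()

def increment_racy():
    # Atomicity violation: read, compute, write are three separate
    # steps; another thread can run between the read and the write,
    # and one of the two increments is then lost.
    global counter
    tmp = counter
    counter = tmp + 1

def increment_safely():
    # The lock turns check + update into one atomic critical section.
    global counter
    with lock:
        counter += 1

threads = [threading.Thread(target=increment_safely) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the locked version, all 100 increments survive; with the racy version, some interleavings would lose updates.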
&lt;br /&gt;
temporary files - you have a program running, and in the middle of running it&#039;s often useful to generate temporary files (to dump data mid-run). Where do they often go? Into a shared directory (/tmp), mixed in with everyone else&#039;s files.&lt;br /&gt;
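This is why temp files in a shared /tmp are dangerous: if the name is predictable, another user can plant a symlink at that name first. `mkstemp()` sidesteps this by creating and opening an unpredictable name in one atomic step (illustrative Python):

```python
import os
import tempfile

# Building a predictable name in /tmp and then opening it leaves a
# window for an attacker to drop a symlink at that name. mkstemp()
# instead creates *and* opens a uniquely named file in one atomic
# step (O_CREAT | O_EXCL under the hood), so nothing can slip in
# between the name choice and the open.
fd, path = tempfile.mkstemp(prefix="notes-", suffix=".tmp")
try:
    os.write(fd, b"intermediate data")
    os.lseek(fd, 0, os.SEEK_SET)
    contents = os.read(fd, 64)
finally:
    os.close(fd)
    os.remove(path)
```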
&lt;br /&gt;
Then there are programs that are somehow privileged (setuid / setgid programs). When you normally run a program on a Unix-like system, it runs as you - run ls, and ls runs with your identity - which means it can access any files that you own; it has the privileges that you have. But sometimes you need to run programs that need more access. Classic examples include lpr and passwd.&lt;br /&gt;
&lt;br /&gt;
The passwd program allows you to change your password - a secure hash of your password is stored in a file: /etc/passwd, or /etc/shadow on modern systems.&lt;br /&gt;
&lt;br /&gt;
Do you want anyone else to be able to change these files? No - but regular users sometimes need to change their passwords, so I have a file that I need to keep protected while still sometimes allowing access to it. This is not an object-oriented system; these are just files. So how do I make sure that only certain code can modify that file? You have some programs that, when run, run not with the privileges of the person who ran them but with the privileges of the user who owns the program file. We want the password program to run with extra privileges - as root - so it can change those files. How is this denoted? Alongside the permission bits there are the setuid and setgid bits. If these bits are set, the program runs with the user or group of the file&#039;s owner. The passwd program has the setuid bit set and is owned by root, so when you run it, it runs as root. You hope passwd doesn&#039;t have any bugs, otherwise it could corrupt your password file.&lt;br /&gt;
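The setuid bit really is just one more bit sitting next to the permission bits. A quick way to see it (illustrative Python, using a scratch file rather than the real /usr/bin/passwd):

```python
import os
import stat
import tempfile

# The setuid bit is one bit in a file's mode word. When the kernel
# exec()s a file with this bit set, the process runs with the
# effective uid of the file's *owner*, not of the user who ran it.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o4755)            # rwsr-xr-x: setuid bit + 755
mode = os.stat(path).st_mode
has_setuid = bool(mode & stat.S_ISUID)
os.remove(path)
```

This is the same bit `ls -l` shows as the `s` in `-rwsr-xr-x` on /usr/bin/passwd.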
&lt;br /&gt;
Why am I talking about this now? Because these sorts of programs, setuid programs, are particularly vulnerable to TOCTTOU vulnerabilities: they want to access a file, and if they are not careful, they access it with the wrong privileges.&lt;br /&gt;
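The shape of the bug, and the standard fix - check the already-open descriptor instead of the path - can be sketched as (illustrative Python on a POSIX system; the helper names are mine):

```python
import os

def write_if_owned_racy(path, uid, data):
    # TOCTTOU bug: between stat() (the check) and open() (the use),
    # the file at `path` can be swapped out, e.g. for a symlink
    # pointing at /etc/shadow.
    if os.stat(path).st_uid == uid:
        with open(path, "w") as f:
            f.write(data)

def write_if_owned(path, uid, data):
    # Fix: open first, then check the *open descriptor* with fstat().
    # Whatever fstat() reports is exactly the object we will write to;
    # O_NOFOLLOW additionally refuses to traverse a planted symlink.
    fd = os.open(path, os.O_WRONLY | os.O_TRUNC | os.O_NOFOLLOW)
    try:
        if os.fstat(fd).st_uid != uid:
            raise PermissionError("file not owned by caller")
        os.write(fd, data.encode())
    finally:
        os.close(fd)
```

The racy version is exactly the passwd-style check-then-modify the lecture goes on to describe.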
&lt;br /&gt;
With what privileges does /bin/passwd run? Root privileges - and it has a command-line option that lets it modify a particular file, so if you aren&#039;t careful it could be used to modify arbitrary files on the system. We should place some restriction on it - but what? You have it check the owner and group of the file against the owner and group of the user who invoked the program, and only let that user modify files they own. Standard check - but how does it know who owns the file? It has to check: a system call that queries the filesystem for the file&#039;s inode and asks who owns it. But what if I change the file after it&#039;s done the check and before it&#039;s modified the file?&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_11&amp;diff=19400</id>
		<title>Operating Systems 2014F Lecture 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_11&amp;diff=19400"/>
		<updated>2014-10-10T13:37:27Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Dining Philosophers problem&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When can you have deadlock?&lt;br /&gt;
&lt;br /&gt;
4 conditions must apply&lt;br /&gt;
&lt;br /&gt;
- mutual exclusion&lt;br /&gt;
&lt;br /&gt;
- hold and wait - you can grab a lock and then wait for the next one (spinning, going to sleep, or similar). You don&#039;t just do things like try the lock and only continue with the computation if you succeed.&lt;br /&gt;
&lt;br /&gt;
- no pre-emption (pre-emption is taking a resource away by force) - you can only have deadlock when everyone is polite.&lt;br /&gt;
&lt;br /&gt;
- circular wait - that&#039;s why the dining philosophers problem has a circular table. You have to have a cycle of threads, each waiting on the next - that&#039;s what gets you into the problem. &lt;br /&gt;
&lt;br /&gt;
Break any one of these and you can&#039;t have deadlock.&lt;br /&gt;
&lt;br /&gt;
When people talk about deadlock, they talk about strategies for removing the problem in terms of three approaches:&lt;br /&gt;
&lt;br /&gt;
 1 prevention - construct your system so that deadlock can never happen (make it impossible). Design your system so that one or more of the four conditions goes away. &lt;br /&gt;
&lt;br /&gt;
Let&#039;s say one thread needs three locks to continue - whenever it goes to sleep, I&#039;ll take its chopstick and give it back before it wakes up, and it&#039;ll never know the difference.&lt;br /&gt;
&lt;br /&gt;
 2 avoidance - unlike prevention, you are not making deadlock impossible. &lt;br /&gt;
&lt;br /&gt;
All four conditions are present in principle, but you can watch the computation unfold, notice when you are getting into a situation that can lead to deadlock, and avoid it - allocating resources in such a way that you know deadlock is never going to happen. It&#039;s not necessarily prediction, where you lay out a schedule for how everything operates in advance. For example, take car accidents: complete prevention is not getting into the car at all. Avoidance is seeing something coming and steering around it, or following strategies like staying within the lanes and not going off the road.&lt;br /&gt;
&lt;br /&gt;
 3 detect and recover - you had an accident? Sorry - call the police, call the body shop, fix it up.&lt;br /&gt;
&lt;br /&gt;
In practice we mostly do detect and recover; you don&#039;t do any of these perfectly. Where do watchdog timers come in? A watchdog is something that watches the system and detects when it has failed. An easy analogy: a guard walking the perimeter of a base has to check in periodically to say everything is fine. If someone were to attack the base, what would they do? Take out the guard. Then the check-in signal wouldn&#039;t come, and you would take steps to deal with it. &lt;br /&gt;
&lt;br /&gt;
A watchdog timer is a separate processor that periodically sends messages - normal interrupts - to the system. If the OS is working properly, it keeps responding to those interrupts; if it doesn&#039;t respond to the watchdog timer&#039;s request, the watchdog goes uh-oh and restarts the system. This spontaneous reboot keeps the system running, the assumption being that when you reboot, you come back to a working state.&lt;br /&gt;
&lt;br /&gt;
Two kinds of non-deadlock concurrency bugs:&lt;br /&gt;
- atomicity violations - you were supposed to grab a lock around a sequence of operations and you didn&#039;t.&lt;br /&gt;
- order violations - you use something before it has been initialized (use before initialize).&lt;br /&gt;
&lt;br /&gt;
TOCTTOU&lt;br /&gt;
&lt;br /&gt;
Time Of Check To Time Of Use&lt;br /&gt;
&lt;br /&gt;
race conditions - TOCTTOU bugs are a particular class of race condition. In terms of memory accesses to a variable: we check the value of the variable, and then we try to make a change based on that check - but the value can change in between.&lt;br /&gt;
&lt;br /&gt;
temporary files - you have a program running, and in the middle of running it&#039;s often useful to generate temporary files (to dump data mid-run). Where do they often go? Into a shared directory (/tmp), mixed in with everyone else&#039;s files.&lt;br /&gt;
&lt;br /&gt;
Then there are programs that are somehow privileged (setuid / setgid programs). When you normally run a program on a Unix-like system, it runs as you - run ls, and ls runs with your identity - which means it can access any files that you own; it has the privileges that you have. But sometimes you need to run programs that need more access. Classic examples include lpr and passwd.&lt;br /&gt;
&lt;br /&gt;
The passwd program allows you to change your password - a secure hash of your password is stored in a file: /etc/passwd, or /etc/shadow on modern systems.&lt;br /&gt;
&lt;br /&gt;
Do you want anyone else to be able to change these files? No - but regular users sometimes need to change their passwords, so I have a file that I need to keep protected while still sometimes allowing access to it. This is not an object-oriented system; these are just files. So how do I make sure that only certain code can modify that file? You have some programs that, when run, run not with the privileges of the person who ran them but with the privileges of the user who owns the program file. We want the password program to run with extra privileges - as root - so it can change those files. How is this denoted? Alongside the permission bits there are the setuid and setgid bits. If these bits are set, the program runs with the user or group of the file&#039;s owner. The passwd program has the setuid bit set and is owned by root, so when you run it, it runs as root. You hope passwd doesn&#039;t have any bugs, otherwise it could corrupt your password file.&lt;br /&gt;
&lt;br /&gt;
Why am I talking about this now? Because these sorts of programs, setuid programs, are particularly vulnerable to TOCTTOU vulnerabilities: they want to access a file, and if they are not careful, they access it with the wrong privileges.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19383</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19383"/>
		<updated>2014-10-08T13:56:44Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assignment 5 visit:&lt;br /&gt;
&lt;br /&gt;
[[File:Pagetableqass5.png]]&lt;br /&gt;
&lt;br /&gt;
By using base 16 vs. base 10 - yes, the alphabet is larger. When you talk about hex vs. decimal you are changing how we read it, but there&#039;s something more correct about using hex vs. base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 - a fractional number of bits - it&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s why you also see octal - 3 bits per digit instead of 4 (a reduced range). Hexadecimal represents 4 bits per digit and is a bit cleaner. &lt;br /&gt;
&lt;br /&gt;
When you say the offset is a power of two, the page size is also a power of two: the number of bits used to encode the offset determines its range, and hence the page size. The offset selects a specific byte in a page - what is the range of bytes that can go in a page? Think classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there&#039;s a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. Key assumption: when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No. &lt;br /&gt;
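The page/offset arithmetic above can be sketched in Python (a toy illustration; the constants assume classic 4 KiB x86 pages):&lt;br /&gt;

```python
# Toy sketch: split a virtual address into page number and offset,
# assuming 4 KiB pages (12 offset bits), as in classic x86.
OFFSET_BITS = 12
PAGE_SIZE = 2 ** OFFSET_BITS      # 4096 bytes per page

def split(addr):
    # page number is the high bits, offset is the low 12 bits
    return addr // PAGE_SIZE, addr % PAGE_SIZE

print(split(0x1234))       # (1, 564): page 1, offset 0x234
print(PAGE_SIZE - 1)       # 4095, the largest possible offset
```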
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: something that lets you check a value and change it in one atomic operation. That&#039;s the basic functionality of the hardware; the semantics differ among these higher-level constructs. When do you use what? What is the purpose of these things? There&#039;s a huge amount of overlap. In each case you have some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between these is in how you deal with the interaction between the storage and the queue of threads. &lt;br /&gt;
If you have a lock with grab and release operations, does releasing the lock always succeed? Yes - if you hold the lock and you release it, it always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread is going to sit there and wait for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: try-lock checks whether the lock is grabbable. It grabs the lock if it&#039;s available, but if it&#039;s not available it just returns - it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
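A hedged sketch of grab / release / try-lock using Python&#039;s threading.Lock (the method names are Python&#039;s, not pthreads&#039;):&lt;br /&gt;

```python
import threading

# Sketch of blocking grab vs. non-blocking try-lock using
# Python threading.Lock (acquire / release).
lock = threading.Lock()

lock.acquire()                         # grab: would block if already held
got_it = lock.acquire(blocking=False)  # try-lock: returns immediately
print(got_it)                          # False - the lock is already held
lock.release()                         # release always succeeds for the holder
print(lock.acquire(blocking=False))    # True - now it was available
lock.release()
```

acquire(blocking=False) is the try-lock: it never waits, it just reports whether it got the lock.&lt;br /&gt;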
&lt;br /&gt;
What if 5 processes try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to just go into a loop that keeps trying, trying, trying until it succeeds; this is called a spin lock, and it is sometimes a good idea. But while a CPU is running a spin lock it is doing nothing else, which is not an efficient use of resources - you only spin when you expect the wait to be very small, because no one is going to hold the lock for very long, and then it&#039;s better to sit there and wait. It&#039;s like going to a government office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: threads queue up and wait, freeing up resources to do something else - the processor running a thread that gets stuck puts that thread to sleep. What does sleep mean? A thread going to sleep implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. It will not consume any CPU time right now, though in biological terms it&#039;s still consuming resources - still consuming space (RAM, cache lines) - it is just not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling - in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
[[File:locks.png]]&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage - each has logically separate copies. Threads share an address space, and use it to coordinate their operations. If the lock is not available, a thread queues up waiting for the lock; you have these threads lined up waiting for service. &lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, that is spinning. You don&#039;t queue up with spinning - every waiting core is just sitting there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. You don&#039;t use spin locks on things you expect to wait a long time for. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and waking it up again is relatively expensive. It potentially involves a context switch, depending on how threads are implemented, and it involves a system call, because you call sleep(). If you are going to be waiting less than the time it takes to execute roughly 10 - 50 instructions, you want to use a spin lock. &lt;br /&gt;
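A toy spin lock in Python, only to show the busy-wait structure (real spin locks rely on a hardware atomic test-and-set; here a non-blocking acquire stands in for it):&lt;br /&gt;

```python
import threading

# Toy sketch of a spin lock: emulate atomic test-and-set with a
# non-blocking acquire in a loop. Real spin locks use a hardware
# atomic instruction; this only shows the busy-wait structure.
class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()
    def grab(self):
        while not self._flag.acquire(blocking=False):
            pass                     # spin: burn CPU until it frees up
    def release(self):
        self._flag.release()

counter = 0
spin = SpinLock()

def worker():
    global counter
    for _ in range(1000):
        spin.grab()
        counter += 1                 # critical section
        spin.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: no increments lost
```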
&lt;br /&gt;
With a lock, all the threads queue up until the one that was active gets done. It&#039;s a FIFO queue; they take their turns. &lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen - a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. What you have is check condition (blocking), check condition (non-blocking), and set condition - which seems kind of the same as a lock. The normal, expected thing when you check for the condition is the blocking one: I&#039;m going to sleep until the condition has happened. Set condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or something): only one thread can access the resource at a time, enforcing mutual exclusion - we want serial semantics. With a condition variable we are not trying to enforce serial semantics - tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying pieces of the mechanism are the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in - instead of sitting at the passport office, where one person goes up, then another person goes up. In a concurrent application you will use these different constructs. A lock is generally about protecting access to a data structure, whereas a condition variable is about waiting until some state of the computation or the environment has come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally in code that uses locks there will be some mutex (mutual exclusion) - a binary semaphore.&lt;br /&gt;
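The club-doors analogy as a Python sketch (illustrative names), using threading.Condition and notify_all to wake every waiter at once:&lt;br /&gt;

```python
import threading

# Sketch of condition-variable semantics: everyone waiting is woken
# when the condition is set (notify_all), unlike a lock, where threads
# get in one at a time.
cond = threading.Condition()
doors_open = False
entered = []

def clubgoer(i):
    with cond:
        while not doors_open:        # re-check: wait can wake spuriously
            cond.wait()
        entered.append(i)

threads = [threading.Thread(target=clubgoer, args=(i,)) for i in range(5)]
for t in threads: t.start()

with cond:
    doors_open = True                # set the condition...
    cond.notify_all()                # ...and tell everyone at once
for t in threads: t.join()
print(sorted(entered))  # all 5 got in: [0, 1, 2, 3, 4]
```

Each waiter re-checks the condition in a loop, since wait() can wake spuriously; notify_all is the tell-everyone broadcast.&lt;br /&gt;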
&lt;br /&gt;
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores: up / down. In a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is: down decrements the value, and blocks when the value goes below 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment it back up. A lock, in this case, is a semaphore whose value you set to 1. Instead of saying only 1 can be present, really a semaphore is like a bathroom that 4 people can use at a time: when someone comes out, someone else can go in. If the room isn&#039;t full - there is a certain number of resources - you can go in, but once it fills up, everyone waits. In practice, if you look at how people build things: people use mutexes all the time, condition variables have clear semantics, and semaphores are more of a building block - I want only so many threads running at the same time. They are much harder to reason about; you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;br /&gt;
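The bathroom analogy as a Python sketch with a counting semaphore of 4 (illustrative; the counters just record how full the room got):&lt;br /&gt;

```python
import threading

# Sketch of a counting semaphore as the bathroom analogy: at most
# 4 threads inside at once.
room = threading.Semaphore(4)       # 4 slots
inside = 0
peak = 0
guard = threading.Lock()

def visitor():
    global inside, peak
    room.acquire()                  # down: blocks once 4 are inside
    with guard:
        inside += 1
        peak = max(peak, inside)
    with guard:
        inside -= 1
    room.release()                  # up: frees a slot

threads = [threading.Thread(target=visitor) for _ in range(20)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 4
```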
&lt;br /&gt;
What&#039;s on the midterm? The basic coverage is the assignments and the tutorials, with a focus on the assignments. The key thing with pthreads is that it&#039;s supposed to be portable across UNIX-like systems; the exact system calls used can vary. When you do pthreads, the low-level assembly is present as well - it&#039;s a mix of things, and it can be implemented partly in user space and partly in the kernel. The mapping between the low-level mechanisms and the API is non-trivial; when you look at the code, it&#039;s surprisingly complex. Everything through assignment 5 is going to be on the midterm. If you look at assignment 5, it has bits and pieces of the lectures from last week and this week.&lt;br /&gt;
&lt;br /&gt;
producer / consumer - you should think of a pipe. On the command line you can do things like: sort foo | uniq&lt;br /&gt;
&lt;br /&gt;
sort will sort the lines of the file, and uniq will remove any duplicated lines (which, after sorting, are adjacent). You could imagine another command that adds a line break after every word. The pipe operator means: take stdout from one program, and provide it as stdin to the following program.&lt;br /&gt;
&lt;br /&gt;
We are talking about processes: the one on the left side of the pipe is the producer, and the one on the right is the consumer. The pipe operation does not work on files, it works on processes. You always think of ls as terminating, but you could have something that produces a never-ending stream of values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
tail is running continuously - is this thing constantly cycling, just waiting? Is there input? No, there&#039;s not. Does every character that comes out of tail get processed by grep one byte at a time - is that how these interact? That would be dumb. We want these to be coordinated, but we don&#039;t want the kernel to have to control, byte by byte, the relative rates at which they run. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cat /dev/random | sort | uniq&lt;br /&gt;
&lt;br /&gt;
tail -f /var/log/syslog | grep anil&lt;br /&gt;
&lt;br /&gt;
Coordination problem: the producer could run faster, or the consumer could run faster. If the producer runs too fast for the consumer, you may want to put it to sleep. If the consumer runs too fast, you want it to run for a while and then put it to sleep until the producer has produced something. &lt;br /&gt;
&lt;br /&gt;
Tail should not keep running unless grep can keep up. This is the producer-consumer pattern; to make it work you use condition variables and locks.&lt;br /&gt;
The key idea here is that you have some storage between these - a buffer. The buffer can be arbitrarily sized; you let the producer produce for a while and the consumer consume for a while. The buffer is a circular list: when you get to the end you start back at the beginning.&lt;br /&gt;
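A minimal bounded-buffer sketch in Python of the producer/consumer pattern described above, with a mutex plus two condition variables (names illustrative; a deque stands in for the circular list):&lt;br /&gt;

```python
import threading
from collections import deque

# Sketch of the producer/consumer pattern with a bounded buffer,
# a mutex, and two condition variables sharing that mutex.
CAPACITY = 4
buffer = deque()
lock = threading.Lock()
not_full = threading.Condition(lock)
not_empty = threading.Condition(lock)
consumed = []

def producer():
    for item in range(10):
        with not_full:
            while len(buffer) == CAPACITY:
                not_full.wait()          # producer too fast: sleep
            buffer.append(item)
            not_empty.notify()

def consumer():
    for _ in range(10):
        with not_empty:
            while len(buffer) == 0:
                not_empty.wait()         # consumer too fast: sleep
            consumed.append(buffer.popleft())
            not_full.notify()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # items arrive in order: [0, 1, ..., 9]
```

Whichever side gets ahead goes to sleep on its condition variable, and the other side wakes it when there is room (or data) again.&lt;br /&gt;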
&lt;br /&gt;
Guarantee exclusive access: you have to have a mutex (for the entire buffer data structure) to ensure the producer and consumer don&#039;t step on each other - make sure both aren&#039;t messing with it at the same time.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19382</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19382"/>
		<updated>2014-10-08T13:53:36Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assignment 5 visit:&lt;br /&gt;
&lt;br /&gt;
[[File:Pagetableqass5.png]]&lt;br /&gt;
&lt;br /&gt;
By using base 16 vs. base 10 - yes, the alphabet is larger. When you talk about hex vs. decimal you are changing how we read it, but there&#039;s something more correct about using hex vs. base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 - a fractional number of bits - it&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s why you also see octal - 3 bits per digit instead of 4 (a reduced range). Hexadecimal represents 4 bits per digit and is a bit cleaner. &lt;br /&gt;
&lt;br /&gt;
When you say the offset is a power of two, the page size is also a power of two: the number of bits used to encode the offset determines its range, and hence the page size. The offset selects a specific byte in a page - what is the range of bytes that can go in a page? Think classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there&#039;s a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. Key assumption: when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No. &lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: something that lets you check a value and change it in one atomic operation. That&#039;s the basic functionality of the hardware; the semantics differ among these higher-level constructs. When do you use what? What is the purpose of these things? There&#039;s a huge amount of overlap. In each case you have some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between these is in how you deal with the interaction between the storage and the queue of threads. &lt;br /&gt;
If you have a lock with grab and release operations, does releasing the lock always succeed? Yes - if you hold the lock and you release it, it always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread is going to sit there and wait for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: try-lock checks whether the lock is grabbable. It grabs the lock if it&#039;s available, but if it&#039;s not available it just returns - it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to just go into a loop that keeps trying, trying, trying until it succeeds; this is called a spin lock, and it is sometimes a good idea. But while a CPU is running a spin lock it is doing nothing else, which is not an efficient use of resources - you only spin when you expect the wait to be very small, because no one is going to hold the lock for very long, and then it&#039;s better to sit there and wait. It&#039;s like going to a government office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: threads queue up and wait, freeing up resources to do something else - the processor running a thread that gets stuck puts that thread to sleep. What does sleep mean? A thread going to sleep implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. It will not consume any CPU time right now, though in biological terms it&#039;s still consuming resources - still consuming space (RAM, cache lines) - it is just not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling - in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
[[File:locks.png]]&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage - each has logically separate copies. Threads share an address space, and use it to coordinate their operations. If the lock is not available, a thread queues up waiting for the lock; you have these threads lined up waiting for service. &lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, that is spinning. You don&#039;t queue up with spinning - every waiting core is just sitting there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. You don&#039;t use spin locks on things you expect to wait a long time for. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and waking it up again is relatively expensive. It potentially involves a context switch, depending on how threads are implemented, and it involves a system call, because you call sleep(). If you are going to be waiting less than the time it takes to execute roughly 10 - 50 instructions, you want to use a spin lock. &lt;br /&gt;
&lt;br /&gt;
With a lock, all the threads queue up until the one that was active gets done. It&#039;s a FIFO queue; they take their turns. &lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen - a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. What you have is check condition (blocking), check condition (non-blocking), and set condition - which seems kind of the same as a lock. The normal, expected thing when you check for the condition is the blocking one: I&#039;m going to sleep until the condition has happened. Set condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or something): only one thread can access the resource at a time, enforcing mutual exclusion - we want serial semantics. With a condition variable we are not trying to enforce serial semantics - tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying pieces of the mechanism are the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in - instead of sitting at the passport office, where one person goes up, then another person goes up. In a concurrent application you will use these different constructs. A lock is generally about protecting access to a data structure, whereas a condition variable is about waiting until some state of the computation or the environment has come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally in code that uses locks there will be some mutex (mutual exclusion) - a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores: up / down. In a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is: down decrements the value, and blocks when the value goes below 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment it back up. A lock, in this case, is a semaphore whose value you set to 1. Instead of saying only 1 can be present, really a semaphore is like a bathroom that 4 people can use at a time: when someone comes out, someone else can go in. If the room isn&#039;t full - there is a certain number of resources - you can go in, but once it fills up, everyone waits. In practice, if you look at how people build things: people use mutexes all the time, condition variables have clear semantics, and semaphores are more of a building block - I want only so many threads running at the same time. They are much harder to reason about; you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;br /&gt;
&lt;br /&gt;
What&#039;s on the midterm? The basic coverage is the assignments and the tutorials, with a focus on the assignments. The key thing with pthreads is that it&#039;s supposed to be portable across UNIX-like systems; the exact system calls used can vary. When you do pthreads, the low-level assembly is present as well - it&#039;s a mix of things, and it can be implemented partly in user space and partly in the kernel. The mapping between the low-level mechanisms and the API is non-trivial; when you look at the code, it&#039;s surprisingly complex. Everything through assignment 5 is going to be on the midterm. If you look at assignment 5, it has bits and pieces of the lectures from last week and this week.&lt;br /&gt;
&lt;br /&gt;
producer / consumer - you should think of a pipe. On the command line you can do things like: sort foo | uniq&lt;br /&gt;
&lt;br /&gt;
sort will sort the lines of the file, and uniq will remove any duplicated lines (which, after sorting, are adjacent). You could imagine another command that adds a line break after every word. The pipe operator means: take stdout from one program, and provide it as stdin to the following program.&lt;br /&gt;
&lt;br /&gt;
We are talking about processes: the one on the left side of the pipe is the producer, and the one on the right is the consumer. The pipe operation does not work on files, it works on processes. You always think of ls as terminating, but you could have something that produces a never-ending stream of values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
tail is running continuously - is this thing constantly cycling, just waiting? Is there input? No, there&#039;s not. Does every character that comes out of tail get processed by grep one byte at a time - is that how these interact? That would be dumb. We want these to be coordinated, but we don&#039;t want the kernel to have to control, byte by byte, the relative rates at which they run. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cat /dev/random | sort | uniq&lt;br /&gt;
&lt;br /&gt;
tail -f /var/log/syslog | grep anil&lt;br /&gt;
&lt;br /&gt;
Coordination problem: the producer could run faster, or the consumer could run faster. If the producer runs too fast for the consumer, you may want to put it to sleep. If the consumer runs too fast, you want it to run for a while and then put it to sleep until the producer has produced something. &lt;br /&gt;
&lt;br /&gt;
Tail should not keep running unless grep can keep up. This is the producer-consumer pattern; to make it work you use condition variables and locks.&lt;br /&gt;
The key idea here is that you have some storage between these - a buffer. The buffer can be arbitrarily sized; you let the producer produce for a while and the consumer consume for a while. The buffer is a circular list: when you get to the end you start back at the beginning.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19381</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19381"/>
		<updated>2014-10-08T13:51:59Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assignment 5 visit:&lt;br /&gt;
&lt;br /&gt;
[[File:Pagetableqass5.png]]&lt;br /&gt;
&lt;br /&gt;
By using base 16 vs. base 10 - yes, the alphabet is larger. When you talk about hex vs. decimal you are changing how we read it, but there&#039;s something more correct about using hex vs. base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 - a fractional number of bits - it&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s why you also see octal - 3 bits per digit instead of 4 (a reduced range). Hexadecimal represents 4 bits per digit and is a bit cleaner. &lt;br /&gt;
&lt;br /&gt;
When you say the offset is a power of two, the page size is also a power of two: the number of bits used to encode the offset determines its range, and hence the page size. The offset selects a specific byte in a page - what is the range of bytes that can go in a page? Think classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there&#039;s a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. Key assumption: when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No. &lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: something that lets you check a value and change it in one atomic operation. That&#039;s the basic functionality of the hardware; the semantics differ among these higher-level constructs. When do you use what? What is the purpose of these things? There&#039;s a huge amount of overlap. In each case you have some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between these is in how you deal with the interaction between the storage and the queue of threads. &lt;br /&gt;
If you have a lock with grab and release operations, does releasing the lock always succeed? Yes - if you hold the lock and you release it, it always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread is going to sit there and wait for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: try-lock checks whether the lock is grabbable. It grabs the lock if it&#039;s available, but if it&#039;s not available it just returns - it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to just go into a loop that keeps trying, trying, trying until it succeeds; this is called a spin lock, and it is sometimes a good idea. But while a CPU is running a spin lock it is doing nothing else, which is not an efficient use of resources - you only spin when you expect the wait to be very small, because no one is going to hold the lock for very long, and then it&#039;s better to sit there and wait. It&#039;s like going to a government office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: threads queue up and wait, freeing up resources to do something else - the processor running a thread that gets stuck puts that thread to sleep. What does sleep mean? A thread going to sleep implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. It will not consume any CPU time right now, though in biological terms it&#039;s still consuming resources - still consuming space (RAM, cache lines) - it is just not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling - in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
[[File:locks.png]]&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage; each has its own logically separate copy. Threads share an address space, and they use it to coordinate their operations. When the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t queue up with spinning; every waiting core just sits there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. So you don&#039;t use spin locks on things you expect to wait on for long. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and then waking it up is relatively expensive. It can involve a context switch (depending on how threads are implemented), and it involves a system call, since you call sleep(). If you are going to be waiting less than the time it takes to execute roughly 10 - 50 instructions, you want to use a spin lock.&lt;br /&gt;
&lt;br /&gt;
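The two waiting strategies can be contrasted in a small Python sketch. Here threading.Event stands in for the shared condition; a real spin lock would use an atomic hardware operation rather than a plain flag:&lt;br /&gt;

```python
import threading

ready = threading.Event()

def waiter_spin():
    # spin-lock style: burn CPU re-checking the condition in a tight loop
    while not ready.is_set():
        pass

def waiter_sleep():
    # wait-queue style: the thread is blocked (asleep) until it is woken
    ready.wait()

t1 = threading.Thread(target=waiter_spin)
t2 = threading.Thread(target=waiter_sleep)
t1.start(); t2.start()

ready.set()     # the condition happens: the sleeper is woken, the spinner notices
t1.join(); t2.join()
```

The spinner occupies a core the whole time it waits; the sleeper costs two scheduling operations but frees the core in between.&lt;br /&gt;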
With a lock, all the threads queue up; when the one that was active gets done, the next takes its turn. It is a FIFO queue.&lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen: a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. The operations are check condition (blocking), check condition (non-blocking), and set condition, which seems kind of the same as a lock. What&#039;s really happening is that when you check for the condition, you normally do the blocking one: I&#039;m going to sleep until the condition has happened. Set condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or something): only one thread can access the resource at a time, enforcing mutual exclusion, because we want serial semantics. With a condition variable we are not trying to enforce serial semantics: tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying mechanism is the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That is instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs: a lock is generally about protecting access to a data structure, whereas a condition variable is about waiting for some state of the computation or of the environment to come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks will have some mutex (mutual exclusion), which is a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
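A minimal sketch of the tell-everyone behaviour, using Python&#039;s threading.Condition (the names and thread count here are illustrative):&lt;br /&gt;

```python
import threading

cond = threading.Condition()
happened = False
woken = []

def waiter(name):
    with cond:
        # blocking check: sleep until the condition has happened
        while not happened:
            cond.wait()
        woken.append(name)

threads = [threading.Thread(target=waiter, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

with cond:
    happened = True      # set condition: it happened
    cond.notify_all()    # tell everyone: the whole queue wakes up

for t in threads:
    t.join()
```

Contrast notify_all() with a lock release, which lets exactly one waiter proceed at a time.&lt;br /&gt;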
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores: up / down. In a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is that down decrements the value, and blocks only if the result is less than 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment the value back up. A lock is the special case where you set the value to 1. Instead of saying only 1 can be present, a semaphore is really like a bathroom where 4 people can be inside at a time: when someone comes out, someone else can go in; while there are free spots you can go in, but once it fills up, everyone waits. In practice, if you look at how people build things, people use mutexes all the time, and condition variables have clear semantics; semaphores are more of a building block, for things like allowing only so many threads to run at the same time. They are much harder to reason about: you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;br /&gt;
&lt;br /&gt;
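The bathroom analogy maps directly onto a counting semaphore; a sketch with Python&#039;s threading.BoundedSemaphore (the occupancy counters are just instrumentation for the example):&lt;br /&gt;

```python
import threading

bathroom = threading.BoundedSemaphore(4)   # 4 people can be inside at a time
inside = 0
peak = 0
count_lock = threading.Lock()

def person():
    global inside, peak
    with bathroom:                  # down: blocks once 4 people are inside
        with count_lock:
            inside += 1
            peak = max(peak, inside)
        with count_lock:
            inside -= 1
    # leaving the with-block is the up: someone waiting may now go in

threads = [threading.Thread(target=person) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Initializing the semaphore to 1 would give exactly the binary-semaphore / mutex behaviour described above.&lt;br /&gt;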
What&#039;s on the midterm? The basic coverage is the assignments and the tutorials, with a focus on the assignments; everything through assignment 5 is fair game, and assignment 5 has bits and pieces of the lectures from last week and this week. The key thing with pthreads is that it is supposed to be portable across Unix-like systems, so the exact system calls used can vary. Underneath, pthreads also relies on low-level assembly; it is a mix of things, implemented partly in user space and partly in the kernel. The mapping between the low-level mechanisms and the API is non-trivial: when you look at the code, it is surprisingly complex.&lt;br /&gt;
&lt;br /&gt;
Producer / consumer: you should think of a pipe. On the command line you can do things like: sort foo | uniq&lt;br /&gt;
&lt;br /&gt;
This sorts the lines of the file, and uniq removes adjacent duplicate lines (which, after sorting, means all duplicates). You could imagine another command that adds a line break after every word. The pipe operator means: take stdout from one program and provide it as stdin to the following program.&lt;br /&gt;
&lt;br /&gt;
We are talking about processes: the one on the left side of the pipe is the producer, and the one on the right is the consumer. The pipe operation does not work on a file; it works on processes. You always think of ls as terminating, but you could have something that produces a never-ending stream of values.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
tail is running continuously. Is grep constantly cycling, polling to see whether there is input? No, it is not. And does every character that comes out of tail get processed by grep one at a time? That would be dumb. We want these processes to be coordinated, but we don&#039;t want the kernel to have to control their relative rates byte by byte.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cat /dev/random | sort | uniq&lt;br /&gt;
&lt;br /&gt;
tail -f /var/log/syslog | grep anil&lt;br /&gt;
&lt;br /&gt;
This is a coordination problem: the producer could run faster, or the consumer could run faster. If the producer runs too fast for the consumer, you want to put the producer to sleep. If the consumer runs too fast, you let it run for a while and then put it to sleep until the producer has produced something.&lt;br /&gt;
&lt;br /&gt;
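That coordination can be sketched with a condition variable and a lock (Python&#039;s threading.Condition bundles both; a small bounded buffer and a range of integers stand in for the pipe and the tail/grep byte stream):&lt;br /&gt;

```python
import threading

buffer = []          # shared, bounded buffer standing in for the pipe
CAPACITY = 3
cond = threading.Condition()
consumed = []

def producer():
    for item in range(10):
        with cond:
            while len(buffer) == CAPACITY:   # consumer too slow: go to sleep
                cond.wait()
            buffer.append(item)
            cond.notify_all()                # wake a sleeping consumer

def consumer():
    for _ in range(10):
        with cond:
            while not buffer:                # producer too slow: go to sleep
                cond.wait()
            consumed.append(buffer.pop(0))
            cond.notify_all()                # wake a sleeping producer

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```
&lt;br /&gt;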
tail should not keep running unless grep can keep up. This is the producer / consumer pattern, and in order for it to work you use condition variables and locks.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19380</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19380"/>
		<updated>2014-10-08T13:44:34Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assignment 5 visit:&lt;br /&gt;
&lt;br /&gt;
[[File:Pagetableqass5.png]]&lt;br /&gt;
&lt;br /&gt;
By using base 16 vs. base 10, yes, the alphabet is larger, but when you talk about hex vs. decimal you are only changing how we read the number. Still, there is something more correct about using hex: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (a fractional number of bits); it&#039;s messy. Decimal is not the native base of a computer, while base 16 maps perfectly onto base 2. That is why you also see octal: 3 bits per digit instead of 4 (a reduced range). Hexadecimal, at 4 bits per digit, is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
When you say the offset: the page size is a power of two, and the number of bits used to encode the offset determines its range. The offset selects a specific byte in a page; the question is what range of bytes can be addressed within a page. Classic x86:&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. The key assumption is that when one processor writes to memory, the other processors will immediately see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
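The page-number / offset split is easy to check in a few lines of Python (the address value is just an example):&lt;br /&gt;

```python
PAGE_SIZE = 4096            # a classic x86 page holds 2**12 bytes
OFFSET_BITS = 12            # so the offset needs exactly 12 bits: 3 hex digits

addr = 0x12345              # an example virtual address

page_number = addr >> OFFSET_BITS   # the high bits select the page
offset = addr % PAGE_SIZE           # the low 12 bits select the byte in the page

print(hex(page_number), hex(offset))   # 0x12 0x345: each hex digit is 4 bits
```
&lt;br /&gt;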
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: hardware that lets you check a value and change it in one atomic operation. That is the basic functionality provided by the hardware; the semantics differ in the higher-level constructs. When do you use what? What is the purpose of each? There is a huge amount of overlap. What you have in each case is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them is how you deal with the interaction between the storage and the queue of threads.&lt;br /&gt;
If you have a lock, does releasing it always succeed? Yes: if you hold the lock and you release it, the release always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try lock: try lock checks whether the lock is grabbable, grabs it if it is available, and returns immediately if it is not. It is non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you simply try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once? What do the rest do? They are all blocked, and there are two ways for them to wait. One is to go into a loop that just keeps trying until it succeeds. This is called a spin lock, and it is sometimes a good idea. While a CPU is running the spin loop it is doing nothing else, which is not an efficient use of resources, so you only spin when you expect the wait to be very small: no one is going to hold the lock for very long, so it is better to just sit there and wait. It is like a government office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the waiting threads queue up and go to sleep, freeing up the resources to do something else. What does sleep mean? Sleep implies something about the scheduling of the thread: it is kicked off the CPU it is running on and will not consume any processor time until it wakes up. It still consumes space resources (it still occupies RAM and may still live in cache), but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on the scheduler, but in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
[[File:locks.png]]&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage; each has its own logically separate copy. Threads share an address space, and they use it to coordinate their operations. When the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t queue up with spinning; every waiting core just sits there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. So you don&#039;t use spin locks on things you expect to wait on for long. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and then waking it up is relatively expensive. It can involve a context switch (depending on how threads are implemented), and it involves a system call, since you call sleep(). If you are going to be waiting less than the time it takes to execute roughly 10 - 50 instructions, you want to use a spin lock.&lt;br /&gt;
&lt;br /&gt;
With a lock, all the threads queue up; when the one that was active gets done, the next takes its turn. It is a FIFO queue.&lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen: a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. The operations are check condition (blocking), check condition (non-blocking), and set condition, which seems kind of the same as a lock. What&#039;s really happening is that when you check for the condition, you normally do the blocking one: I&#039;m going to sleep until the condition has happened. Set condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or something): only one thread can access the resource at a time, enforcing mutual exclusion, because we want serial semantics. With a condition variable we are not trying to enforce serial semantics: tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying mechanism is the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That is instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs: a lock is generally about protecting access to a data structure, whereas a condition variable is about waiting for some state of the computation or of the environment to come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks will have some mutex (mutual exclusion), which is a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores: up / down. In a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is that down decrements the value, and blocks only if the result is less than 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment the value back up. A lock is the special case where you set the value to 1. Instead of saying only 1 can be present, a semaphore is really like a bathroom where 4 people can be inside at a time: when someone comes out, someone else can go in; while there are free spots you can go in, but once it fills up, everyone waits. In practice, if you look at how people build things, people use mutexes all the time, and condition variables have clear semantics; semaphores are more of a building block, for things like allowing only so many threads to run at the same time. They are much harder to reason about: you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;br /&gt;
&lt;br /&gt;
What&#039;s on the midterm? The basic coverage is the assignments and the tutorials, with a focus on the assignments; everything through assignment 5 is fair game, and assignment 5 has bits and pieces of the lectures from last week and this week. The key thing with pthreads is that it is supposed to be portable across Unix-like systems, so the exact system calls used can vary. Underneath, pthreads also relies on low-level assembly; it is a mix of things, implemented partly in user space and partly in the kernel. The mapping between the low-level mechanisms and the API is non-trivial: when you look at the code, it is surprisingly complex.&lt;br /&gt;
&lt;br /&gt;
Producer / consumer: you should think of a pipe. On the command line you can do things like: sort foo | uniq&lt;br /&gt;
&lt;br /&gt;
This sorts the lines of the file, and uniq removes adjacent duplicate lines (which, after sorting, means all duplicates). You could imagine another command that adds a line break after every word. The pipe operator means: take stdout from one program and provide it as stdin to the following program.&lt;br /&gt;
&lt;br /&gt;
We are talking about processes: the one on the left side of the pipe is the producer, and the one on the right is the consumer. The pipe operation does not work on a file; it works on processes. You always think of ls as terminating, but you could have something that produces a never-ending stream of values.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19379</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19379"/>
		<updated>2014-10-08T13:43:09Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assignment 5 visit:&lt;br /&gt;
&lt;br /&gt;
[[File:Pagetableqass5.png]]&lt;br /&gt;
&lt;br /&gt;
By using base 16 vs. base 10, yes, the alphabet is larger, but when you talk about hex vs. decimal you are only changing how we read the number. Still, there is something more correct about using hex: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (a fractional number of bits); it&#039;s messy. Decimal is not the native base of a computer, while base 16 maps perfectly onto base 2. That is why you also see octal: 3 bits per digit instead of 4 (a reduced range). Hexadecimal, at 4 bits per digit, is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
When you say the offset: the page size is a power of two, and the number of bits used to encode the offset determines its range. The offset selects a specific byte in a page; the question is what range of bytes can be addressed within a page. Classic x86:&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. The key assumption is that when one processor writes to memory, the other processors will immediately see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: hardware that lets you check a value and change it in one atomic operation. That is the basic functionality provided by the hardware; the semantics differ in the higher-level constructs. When do you use what? What is the purpose of each? There is a huge amount of overlap. What you have in each case is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them is how you deal with the interaction between the storage and the queue of threads.&lt;br /&gt;
If you have a lock, does releasing it always succeed? Yes: if you hold the lock and you release it, the release always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try lock: try lock checks whether the lock is grabbable, grabs it if it is available, and returns immediately if it is not. It is non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you simply try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once? What do the rest do? They are all blocked, and there are two ways for them to wait. One is to go into a loop that just keeps trying until it succeeds. This is called a spin lock, and it is sometimes a good idea. While a CPU is running the spin loop it is doing nothing else, which is not an efficient use of resources, so you only spin when you expect the wait to be very small: no one is going to hold the lock for very long, so it is better to just sit there and wait. It is like a government office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the waiting threads queue up and go to sleep, freeing up the resources to do something else. What does sleep mean? Sleep implies something about the scheduling of the thread: it is kicked off the CPU it is running on and will not consume any processor time until it wakes up. It still consumes space resources (it still occupies RAM and may still live in cache), but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on the scheduler, but in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
[[File:locks.png]]&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage; each has its own logically separate copy. Threads share an address space, and they use it to coordinate their operations. When the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t queue up with spinning; every waiting core just sits there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. So you don&#039;t use spin locks on things you expect to wait on for long. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and then waking it up is relatively expensive. It can involve a context switch (depending on how threads are implemented), and it involves a system call, since you call sleep(). If you are going to be waiting less than the time it takes to execute roughly 10 - 50 instructions, you want to use a spin lock.&lt;br /&gt;
&lt;br /&gt;
With a lock, all the threads queue up; when the one that was active gets done, the next takes its turn. It is a FIFO queue.&lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen: a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. The operations are check condition (blocking), check condition (non-blocking), and set condition, which seems kind of the same as a lock. What&#039;s really happening is that when you check for the condition, you normally do the blocking one: I&#039;m going to sleep until the condition has happened. Set condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or something): only one thread can access the resource at a time, enforcing mutual exclusion, because we want serial semantics. With a condition variable we are not trying to enforce serial semantics: tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying mechanism is the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That is instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs: a lock is generally about protecting access to a data structure, whereas a condition variable is about waiting for some state of the computation or of the environment to come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks will have some mutex (mutual exclusion), which is a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores: up / down. In a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is that down decrements the value, and blocks only if the result is less than 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment the value back up. A lock is the special case where you set the value to 1. Instead of saying only 1 can be present, a semaphore is really like a bathroom where 4 people can be inside at a time: when someone comes out, someone else can go in; while there are free spots you can go in, but once it fills up, everyone waits. In practice, if you look at how people build things, people use mutexes all the time, and condition variables have clear semantics; semaphores are more of a building block, for things like allowing only so many threads to run at the same time. They are much harder to reason about: you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;br /&gt;
&lt;br /&gt;
What&#039;s on the midterm? The basic coverage is the assignments and the tutorials, with a focus on the assignments; everything through assignment 5 is fair game, and assignment 5 has bits and pieces of the lectures from last week and this week. The key thing with pthreads is that it is supposed to be portable across Unix-like systems, so the exact system calls used can vary. Underneath, pthreads also relies on low-level assembly; it is a mix of things, implemented partly in user space and partly in the kernel. The mapping between the low-level mechanisms and the API is non-trivial: when you look at the code, it is surprisingly complex.&lt;br /&gt;
&lt;br /&gt;
Producer / consumer: you should think of a pipe. On the command line you can do things like: sort foo | uniq&lt;br /&gt;
&lt;br /&gt;
This sorts the lines of the file, and uniq removes adjacent duplicate lines (which, after sorting, means all duplicates). You could imagine another command that adds a line break after every word. The pipe operator means: take stdout from one program and provide it as stdin to the following program.&lt;br /&gt;
&lt;br /&gt;
We are talking about processes: the one on the left side of the pipe is the producer, and the one on the right is the consumer. The pipe operation does not work on a file; it works on processes.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Pagetableqass5.png&amp;diff=19378</id>
		<title>File:Pagetableqass5.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Pagetableqass5.png&amp;diff=19378"/>
		<updated>2014-10-08T13:39:19Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19377</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19377"/>
		<updated>2014-10-08T13:39:06Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Assignment 5 visit:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
By using base 16 vs. base 10, yes, the alphabet is larger, but when you talk about hex vs. decimal you are only changing how we read the number. Still, there is something more correct about using hex: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (a fractional number of bits); it&#039;s messy. Decimal is not the native base of a computer, while base 16 maps perfectly onto base 2. That is why you also see octal: 3 bits per digit instead of 4 (a reduced range). Hexadecimal, at 4 bits per digit, is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
When you say the offset: the page size is a power of two, and the number of bits used to encode the offset determines its range. The offset selects a specific byte in a page; the question is what range of bytes can be addressed within a page. Classic x86:&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. The key assumption is that when one processor writes to memory, the other processors will immediately see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
Locks&lt;br /&gt;
Condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same underlying mechanism: hardware that lets you check a value and change it in one atomic operation. The semantics differ in these higher-level constructs. When do you use which? What is the purpose of each? There&#039;s a huge amount of overlap. What you have in every case is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), plus a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them is in how you deal with the interaction between the storage and the queue of threads.&lt;br /&gt;
If you have a lock with grab-lock and release-lock operations, does releasing the lock always succeed? Yes: if you hold the lock and release it, that always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: try-lock checks whether the lock is grabbable. It grabs the lock if it&#039;s available, but if it&#039;s not available it returns immediately - it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock with the blocking call and, if the lock is held, wait until grabbing it succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to just go into a loop that keeps trying until it succeeds. This is called a spin lock, and it is sometimes a good idea. While spinning, the CPU running that thread is doing nothing else, so it&#039;s not an efficient use of resources - you only spin when you expect the wait to be very small, because no one holds the lock structure for very long. It&#039;s like going to a government office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in that queue doing nothing. What you can do instead is have a wait queue: threads queue up and wait, freeing the resources to do something else; the processor running the thread puts that thread to sleep. What does sleep mean? A thread going to sleep implies something about its scheduling: it is kicked off the CPU it&#039;s running on and does not consume CPU time until it wakes up. In biological terms it&#039;s still consuming resources - it still occupies RAM and may still live in cache - but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says, I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact timing depends on the CPU and the scheduler - in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
[[File:locks.png]]&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage - each has its own logically separate copy. Threads share an address space, and they use it to coordinate their operations. If the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t queue up with spinning - every waiting core just sits there spinning, so for 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. You don&#039;t use spin locks for things you expect to wait a long time for. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and then waking it up is relatively expensive. It can potentially involve a context switch, depending on how threads are implemented, and it involves a system call, because you call sleep(). If you are going to wait less than the time it takes to execute roughly 10 - 50 instructions, you want a spin lock.&lt;br /&gt;
&lt;br /&gt;
With a lock, all the threads queue up; when the one that was active gets done, the next one goes. It&#039;s a FIFO queue - they take turns.&lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen - a key to be pressed, data to come in from the network, the other threads to have gotten far enough in the computation. The operations are: check condition (blocking), check condition (non-blocking), and set condition. That seems much the same as a lock. The normal, expected thing when you check the condition is the blocking version: I&#039;m going to sleep until the condition has happened. Set condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree, or the like): only one thread can access the resource at a time, enforcing mutual exclusion, because we want serial semantics. With a condition variable we are not trying to enforce serial semantics - tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying pieces are the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That&#039;s instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs. A lock is generally about protecting access to a data structure, whereas a condition variable is about waiting until some state of the computation or of the environment has come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks has some mutex (mutual exclusion) - a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores: up / down. In a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is that down decrements the value and blocks only if the result would go below 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment it back up. A lock, in this scheme, is a semaphore whose value you set to 1. Rather than saying only 1 can be present, a semaphore is really like a bathroom where 4 people can be inside at a time: when someone comes out, someone else can go in; while there are free spots you can enter, but once it fills up, everyone waits. In practice, if you look at how people build things: people use mutexes all the time, condition variables have clear semantics, and semaphores are more of a building block - I want only so many threads running at the same time. Semaphores are much harder to reason about; you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;br /&gt;
&lt;br /&gt;
What&#039;s on the midterm? The basic coverage is the assignments and the tutorials, with a focus on the assignments. The key thing with pthreads is that it&#039;s supposed to be portable across Unix-like systems, so the exact system calls used can vary. Under the hood, pthreads also uses low-level assembly; it&#039;s a mix of things, and it can be implemented partly in user space and partly in the kernel. The mapping between the low-level mechanisms and the API is non-trivial - when you look at the code, it&#039;s surprisingly complex. Everything through assignment 5 is going to be on the midterm; assignment 5 has bits and pieces of the lectures from last week and this week.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19374</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19374"/>
		<updated>2014-10-08T13:27:53Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10 - yes, the alphabet is larger, but when you talk about hex vs. decimal you are only changing how we read the value. Still, there&#039;s something more correct about using hex over base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 - a fractional number of bits - so it&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s also why you see octal: an octal digit represents 3 bits instead of 4 (a reduced range per digit). Hexadecimal, at 4 bits per digit, is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
When you say the offset is a power of two, that&#039;s because the page size is a power of two: the number of bits used to encode the offset determines its range. The offset names a specific byte in a page, so the question is: what is the range of bytes that can be addressed within a page? Take classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 K page? The offsets are 0 - 4095. Another thing you will need for the assignment: there&#039;s a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea here is the memory hierarchy. Peterson&#039;s key assumption is that when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
Locks&lt;br /&gt;
Condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same underlying mechanism: hardware that lets you check a value and change it in one atomic operation. The semantics differ in these higher-level constructs. When do you use which? What is the purpose of each? There&#039;s a huge amount of overlap. What you have in every case is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), plus a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them lies in how the storage interacts with the queue of threads.&lt;br /&gt;
If you have a lock, with grab-lock and release-lock operations, does releasing the lock always succeed? Yes - if you hold the lock and release it, that always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: it checks whether the lock is grabbable, grabs it if it&#039;s available, but if it&#039;s not available it returns immediately. It&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
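A sketch of blocking grab versus non-blocking try-lock, using Python&#039;s threading.Lock as a stand-in for the lock being described (the lecture is not about Python specifically):

```python
import threading

lock = threading.Lock()

# Blocking grab: acquire() waits until the lock is free.
lock.acquire()
lock.release()                    # releasing a held lock always succeeds

# Try-lock: acquire(blocking=False) returns immediately.
if lock.acquire(blocking=False):  # True only if the lock was grabbable
    try:
        pass                      # critical section
    finally:
        lock.release()
else:
    pass                          # lock was busy; go do something else
```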
&lt;br /&gt;
What if 5 threads try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to go into a loop that keeps trying, trying, trying until it succeeds. This is called a spin lock, and it is sometimes a good idea, but a CPU running a spin lock is doing nothing else, which is not an efficient use of resources. You spin when you expect the wait to be very small: no one is going to hold the lock for very long, so it&#039;s better to just sit there and wait. It&#039;s like going to a government office - if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the threads queue up and wait, and you free up the processor to do something else - the processor running a thread that cannot make progress puts that thread to sleep. What does sleep mean? It implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. In biological terms, a sleeping thread is still consuming resources - it still consumes space (RAM, cache lines) - but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling, so in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks, we are talking about multiple threads. Processes do not have shared storage - each has logically separate copies. Threads share an address space, and they use it to coordinate their operations. If the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t really queue up with spinning - every core is just sitting there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. So you don&#039;t use spin locks on things you expect to wait a long time for. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and waking it up again is relatively expensive. It potentially involves a context switch (depending on how threads are implemented), and it involves a system call, because you call sleep(). If you are going to wait less than the time it takes to execute roughly 10 - 50 instructions, you want a spin lock.&lt;br /&gt;
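The spin side of that trade-off can be sketched as a toy spin lock. Python exposes no user-level test-and-set instruction, so this sketch borrows threading.Lock&#039;s non-blocking acquire as its atomic check-and-set; a real spin lock would use a hardware atomic such as compare-and-swap:

```python
import threading

class SpinLock:
    """Toy spin lock: busy-waits instead of sleeping."""
    def __init__(self):
        self._inner = threading.Lock()  # stands in for the atomic flag

    def acquire(self):
        # Keep retrying the atomic check-and-set until it succeeds;
        # the waiting thread burns its core the whole time.
        while not self._inner.acquire(blocking=False):
            pass

    def release(self):
        self._inner.release()
```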
&lt;br /&gt;
With a lock, all the threads queue up; when the one that was active gets done, the next in line gets the lock. It&#039;s a FIFO queue - they take their turns.&lt;br /&gt;
&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen - a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. The operations are check-condition (blocking), check-condition (non-blocking), and set-condition, which seems much the same as a lock. Normally, when you check for the condition, you do the blocking version: I&#039;m going to sleep until the condition has happened. Set-condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or whatever): only one thread can access the resource at a time, enforcing mutual exclusion - we want serial semantics. With a condition variable we are not trying to enforce serial semantics: tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying pieces are the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That&#039;s instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs: a lock is generally about protecting access to a data structure, whereas a condition variable is about waiting until some state of the computation or environment has come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks will have some mutex (mutual exclusion) - a binary semaphore.&lt;br /&gt;
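The tell-everyone semantics map onto Python&#039;s threading.Condition, where notify_all() wakes every queued waiter - a minimal sketch of the club-doors idea (the thread names and the door_open flag are illustrative, not from the lecture):

```python
import threading

condition = threading.Condition()
door_open = False
inside = []

def clubgoer(name):
    with condition:
        while not door_open:      # blocking check: sleep until set
            condition.wait()
        inside.append(name)       # every queued waiter gets service

threads = [threading.Thread(target=clubgoer, args=(n,)) for n in ("a", "b", "c")]
for t in threads:
    t.start()

with condition:
    door_open = True              # set the condition...
    condition.notify_all()        # ...and tell everyone in the queue

for t in threads:
    t.join()
print(sorted(inside))             # prints ['a', 'b', 'c']
```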
&lt;br /&gt;
[[File:semaphore.png]]&lt;br /&gt;
&lt;br /&gt;
Semaphores have up and down operations; in a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is: down decrements the value, and blocks if the value would go below 0.&lt;br /&gt;
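Down and up correspond to acquire and release on Python&#039;s threading.Semaphore; initialized to 4, it behaves like a room only 4 can occupy at once (a sketch - the variable names are mine):

```python
import threading

bathroom = threading.Semaphore(4)  # up to 4 can be inside at once

def enter():
    bathroom.acquire()             # "down": blocks once 4 are inside

def leave():
    bathroom.release()             # "up": lets the next waiter in

for _ in range(4):
    enter()                        # fills all 4 slots
# A 5th enter() would now block until someone calls leave().
print(bathroom.acquire(blocking=False))  # prints False: room is full
leave()
```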
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment it back up. A lock, in this case, is a semaphore whose value you set to 1. Instead of saying only 1 can be present, a semaphore is really like a bathroom where 4 people can use it at a time: when someone comes out, someone else can go in. If the room is empty there is a certain number of free resources and you can go in, but if it fills up, everyone waits. In practice, if you look at how people build things: people use mutexes all the time, condition variables have clear semantics, and semaphores are more of a building block - I want only so many threads running at the same time. Semaphores are much harder to reason about; you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19373</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19373"/>
		<updated>2014-10-08T13:27:16Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10 - yes, the alphabet is larger. When you talk about hex vs. decimal, you are changing how we read the number. There&#039;s something more correct about using hex vs. base 10: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (log2 of 10 is about 3.32) - a fractional number of bits. It&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s also why you see octal - 3 bits per digit instead of 4, with a reduced range. When you use hexadecimal, each digit represents exactly 4 bits, which is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
The page size is a power of two, so the number of bits used to encode the offset determines the offset&#039;s range. The offset selects a specific byte within a page. So what is the range of bytes that can go in a page? Take classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 KB page? The offsets are 0 - 4095. Another thing you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea here is the memory hierarchy. Peterson&#039;s key assumption is that when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same underlying mechanism: hardware that lets you check a value and change it in one atomic operation. The semantics differ in these higher-level constructs. When do you use which? What is the purpose of each? There&#039;s a huge amount of overlap. In every case you have some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them lies in how the storage interacts with the queue of threads.&lt;br /&gt;
If you have a lock, with grab-lock and release-lock operations, does releasing the lock always succeed? Yes - if you hold the lock and release it, that always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: it checks whether the lock is grabbable, grabs it if it&#039;s available, but if it&#039;s not available it returns immediately. It&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 threads try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to go into a loop that keeps trying, trying, trying until it succeeds. This is called a spin lock, and it is sometimes a good idea, but a CPU running a spin lock is doing nothing else, which is not an efficient use of resources. You spin when you expect the wait to be very small: no one is going to hold the lock for very long, so it&#039;s better to just sit there and wait. It&#039;s like going to a government office - if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the threads queue up and wait, and you free up the processor to do something else - the processor running a thread that cannot make progress puts that thread to sleep. What does sleep mean? It implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. In biological terms, a sleeping thread is still consuming resources - it still consumes space (RAM, cache lines) - but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling, so in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks, we are talking about multiple threads. Processes do not have shared storage - each has logically separate copies. Threads share an address space, and they use it to coordinate their operations. If the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t really queue up with spinning - every core is just sitting there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. So you don&#039;t use spin locks on things you expect to wait a long time for. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and waking it up again is relatively expensive. It potentially involves a context switch (depending on how threads are implemented), and it involves a system call, because you call sleep(). If you are going to wait less than the time it takes to execute roughly 10 - 50 instructions, you want a spin lock.&lt;br /&gt;
&lt;br /&gt;
With a lock, all the threads queue up; when the one that was active gets done, the next in line gets the lock. It&#039;s a FIFO queue - they take their turns.&lt;br /&gt;
[[File:Conditionvariables.png]]&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen - a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. The operations are check-condition (blocking), check-condition (non-blocking), and set-condition, which seems much the same as a lock. Normally, when you check for the condition, you do the blocking version: I&#039;m going to sleep until the condition has happened. Set-condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or whatever): only one thread can access the resource at a time, enforcing mutual exclusion - we want serial semantics. With a condition variable we are not trying to enforce serial semantics: tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying pieces are the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That&#039;s instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs: a lock is generally about protecting access to a data structure, whereas a condition variable is about waiting until some state of the computation or environment has come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks will have some mutex (mutual exclusion) - a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
Semaphores have up and down operations; in a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is: down decrements the value, and blocks if the value would go below 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment it back up. A lock, in this case, is a semaphore whose value you set to 1. Instead of saying only 1 can be present, a semaphore is really like a bathroom where 4 people can use it at a time: when someone comes out, someone else can go in. If the room is empty there is a certain number of free resources and you can go in, but if it fills up, everyone waits. In practice, if you look at how people build things: people use mutexes all the time, condition variables have clear semantics, and semaphores are more of a building block - I want only so many threads running at the same time. Semaphores are much harder to reason about; you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Conditionvariables.png&amp;diff=19372</id>
		<title>File:Conditionvariables.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Conditionvariables.png&amp;diff=19372"/>
		<updated>2014-10-08T13:25:53Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Semaphore.png&amp;diff=19371</id>
		<title>File:Semaphore.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Semaphore.png&amp;diff=19371"/>
		<updated>2014-10-08T13:25:43Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Locks.png&amp;diff=19370</id>
		<title>File:Locks.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Locks.png&amp;diff=19370"/>
		<updated>2014-10-08T13:25:29Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19369</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19369"/>
		<updated>2014-10-08T13:21:30Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10 - yes, the alphabet is larger. When you talk about hex vs. decimal, you are changing how we read the number. There&#039;s something more correct about using hex vs. base 10: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (log2 of 10 is about 3.32) - a fractional number of bits. It&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s also why you see octal - 3 bits per digit instead of 4, with a reduced range. When you use hexadecimal, each digit represents exactly 4 bits, which is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
The page size is a power of two, so the number of bits used to encode the offset determines the offset&#039;s range. The offset selects a specific byte within a page. So what is the range of bytes that can go in a page? Take classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 KB page? The offsets are 0 - 4095. Another thing you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea here is the memory hierarchy. Peterson&#039;s key assumption is that when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same underlying mechanism: hardware that lets you check a value and change it in one atomic operation. The semantics differ in these higher-level constructs. When do you use which? What is the purpose of each? There&#039;s a huge amount of overlap. In every case you have some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them lies in how the storage interacts with the queue of threads.&lt;br /&gt;
If you have a lock, with grab-lock and release-lock operations, does releasing the lock always succeed? Yes - if you hold the lock and release it, that always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: it checks whether the lock is grabbable, grabs it if it&#039;s available, but if it&#039;s not available it returns immediately. It&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 threads try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to go into a loop that keeps trying, trying, trying until it succeeds. This is called a spin lock, and it is sometimes a good idea, but a CPU running a spin lock is doing nothing else, which is not an efficient use of resources. You spin when you expect the wait to be very small: no one is going to hold the lock for very long, so it&#039;s better to just sit there and wait. It&#039;s like going to a government office - if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the threads queue up and wait, and you free up the processor to do something else - the processor running a thread that cannot make progress puts that thread to sleep. What does sleep mean? It implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. In biological terms, a sleeping thread is still consuming resources - it still consumes space (RAM, cache lines) - but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling, so in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks, we are talking about multiple threads. Processes do not have shared storage - each has logically separate copies. Threads share an address space, and they use it to coordinate their operations. If the lock is not available, a thread queues up waiting for it; you have these threads lined up waiting for service.&lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting while running on the CPU, it is spinning. You don&#039;t really queue up with spinning - every core is just sitting there spinning. For 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. So you don&#039;t use spin locks on things you expect to wait a long time for. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and waking it up again is relatively expensive. It potentially involves a context switch (depending on how threads are implemented), and it involves a system call, because you call sleep(). If you are going to wait less than the time it takes to execute roughly 10 - 50 instructions, you want a spin lock.&lt;br /&gt;
&lt;br /&gt;
With a lock, all the threads queue up; when the one that was active gets done, the next in line gets the lock. It&#039;s a FIFO queue - they take their turns.&lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen - a key to be pressed, something to come in from the network, the other threads to have gotten far enough in the computation. The operations are check-condition (blocking), check-condition (non-blocking), and set-condition, which seems much the same as a lock. Normally, when you check for the condition, you do the blocking version: I&#039;m going to sleep until the condition has happened. Set-condition says that it happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or whatever): only one thread can access the resource at a time, enforcing mutual exclusion - we want serial semantics. With a condition variable we are not trying to enforce serial semantics: tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service. The basic underlying pieces are the same, but the semantics are different. Think of a club: are the doors open? Everyone flows in. That&#039;s instead of sitting at the passport office, where one person goes up at a time. In a concurrent application you will use these different constructs: a lock is generally about protecting access to a data structure, whereas a condition variable is about waiting until some state of the computation or environment has come to pass. With a lock you are asking: can I get exclusive access to the resource? Normally, code that uses locks will have some mutex (mutual exclusion) - a binary semaphore.&lt;br /&gt;
&lt;br /&gt;
Semaphores have up and down operations; in a general counting semaphore the value can be any integer. You can use semaphores to implement both condition variables and locks. The idea of a semaphore is: down decrements the value, and blocks if the value would go below 0.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you say up, you increment it back up. A lock, in this case, is a semaphore whose value you set to 1. Instead of saying only 1 can be present, a semaphore is really like a bathroom where 4 people can use it at a time: when someone comes out, someone else can go in. If the room is empty there is a certain number of free resources and you can go in, but if it fills up, everyone waits. In practice, if you look at how people build things: people use mutexes all the time, condition variables have clear semantics, and semaphores are more of a building block - I want only so many threads running at the same time. Semaphores are much harder to reason about; you have to be very careful with your downs and ups. They are confusing, but you should know what they are.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19368</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19368"/>
		<updated>2014-10-08T13:11:21Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10 - yes, the alphabet is larger. When you talk about hex vs. decimal, you are changing how we read the number. There&#039;s something more correct about using hex vs. base 10: a hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (log2 of 10 is about 3.32) - a fractional number of bits. It&#039;s messy. Decimal is not the native base of a computer; base 16 maps perfectly onto base 2. That&#039;s also why you see octal - 3 bits per digit instead of 4, with a reduced range. When you use hexadecimal, each digit represents exactly 4 bits, which is a bit cleaner.&lt;br /&gt;
&lt;br /&gt;
The page size is a power of two, so the number of bits used to encode the offset determines the offset&#039;s range. The offset selects a specific byte within a page. So what is the range of bytes that can go in a page? Take classic x86.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 KB page? The offsets are 0 - 4095. Another thing you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea here is the memory hierarchy. Peterson&#039;s key assumption is that when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No.&lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same underlying mechanism: hardware that lets you check a value and change it in one atomic operation. The semantics differ in these higher-level constructs. When do you use which? What is the purpose of each? There&#039;s a huge amount of overlap. In every case you have some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them lies in how the storage interacts with the queue of threads.&lt;br /&gt;
If you have a lock, with grab-lock and release-lock operations, does releasing the lock always succeed? Yes - if you hold the lock and release it, that always succeeds. Grabbing the lock, however, can block. When we talk about blocking, the thread sits there and waits for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try-lock: it checks whether the lock is grabbable, grabs it if it&#039;s available, but if it&#039;s not available it returns immediately. It&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 threads try to grab the lock all at once - what do the rest do? They are all blocked, and there are two ways for them to wait. One is to go into a loop that keeps trying, trying, trying until it succeeds. This is called a spin lock, and it is sometimes a good idea, but a CPU running a spin lock is doing nothing else, which is not an efficient use of resources. You spin when you expect the wait to be very small: no one is going to hold the lock for very long, so it&#039;s better to just sit there and wait. It&#039;s like going to a government office - if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the threads queue up and wait, and you free up the processor to do something else - the processor running a thread that cannot make progress puts that thread to sleep. What does sleep mean? It implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on. In biological terms, a sleeping thread is still consuming resources - it still consumes space (RAM, cache lines) - but it is not being actively run by a core. When a thread yields, it is saying: I don&#039;t need this CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on CPU scheduling, so in general you sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage; each has logically separate copies. Threads share an address space, and they use that shared memory to coordinate their operations. If the lock is not available, a thread queues up waiting for it, so you have these threads lined up waiting for service. &lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting, running on the CPU, it is spinning. With spinning you don&#039;t really queue up; every waiting core is just sitting there spinning, so for 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. You don&#039;t use spin locks on things you expect to wait on for long. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and then waking it up is relatively expensive. Putting a thread to sleep can involve a context switch, depending on how threads are implemented, and it involves a system call, because you call sleep(). If you are going to be waiting less than the time it takes to execute roughly 10 - 50 instructions, you want to use a spin lock. &lt;br /&gt;
&lt;br /&gt;
With a lock, all the waiting threads queue up; when the one that holds the lock is done, the next in line gets it. It&#039;s a FIFO queue: they take their turns. &lt;br /&gt;
&lt;br /&gt;
A condition variable has the same basic idea: we have a condition that we want to check and we have a wait queue, but the difference is in how we treat the queue. I&#039;m waiting for something to happen: a key to be pressed, something to come in from the network, the other threads to have gotten a certain distance into the computation. The operations are check condition (blocking), check condition (non-blocking), and set condition, which seems much the same as a lock. What&#039;s really happening is that the normal, expected thing when you check the condition is the blocking one: I&#039;m going to sleep until the condition has happened. Set condition says that it has happened. The difference lies in how you treat the queue. With a lock you are waiting for exclusive access to something (an entire data structure, a tree or something); only one thread can access the resource at a time, enforcing mutual exclusion, because we want serial semantics. With a condition variable we are not trying to enforce serial semantics; we tell everyone! When the condition becomes true, everyone in the queue wakes up and gets service.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19367</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19367"/>
		<updated>2014-10-08T13:06:02Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10, yes, the alphabet is larger. When you talk about hex vs. decimal you are only changing how we read the number, but there is still something more correct about using hex over base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (10 symbols falls between 8 and 16), a fractional number of bits; it&#039;s messy. Decimal is not the native base of a computer. Base 16 maps perfectly onto base 2. That&#039;s why you also see octal: 3 bits per digit vs. 4 (but a reduced range). When you use hexadecimal, each digit represents 4 bits, and that is a bit cleaner. &lt;br /&gt;
&lt;br /&gt;
When we say the offset has a power-of-two range, it is because the page size is a power of two: the number of bits used to encode the offset determines the page size. The offset names a specific byte in a page, so the question is what range of byte positions can occur in a page. Classic x86 uses 4 KB pages.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 KB page? The offsets are 0 - 4095. Another thing that you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. Key assumption: when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No. &lt;br /&gt;
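That 0 - 4095 range falls straight out of the arithmetic; a quick sketch, assuming 4 KB pages:

```python
# Split a virtual address into (page number, offset), assuming 4 KB pages.
PAGE_SIZE = 4096  # 2**12, so the offset occupies the low 12 bits

def split_address(addr):
    # divmod avoids explicit bit operations: the quotient is the page
    # number, the remainder is the byte offset within that page.
    return divmod(addr, PAGE_SIZE)

page, offset = split_address(0x3A7F)
print(page, offset)   # the offset is always in the range 0 - 4095
```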
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: something that lets you check a value and change it in one atomic operation, which is the basic functionality provided by the hardware. The semantics differ in these higher-level constructs. When do you use what? What is the purpose of these things? There is a huge amount of overlap. What you have, in all of them, is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
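The hardware primitive usually meant here is compare-and-swap. Python does not expose a raw CAS, so this sketch fakes the atomicity with a lock; the point is only the semantics of checking and changing in one step (the class name is made up):

```python
import threading

class AtomicCell:
    # Fakes an atomic cell: real hardware does compare-and-swap in a
    # single instruction; here a lock stands in for that atomicity.
    def __init__(self, value):
        self._value = value
        self._guard = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value == expected:
                self._value = new
                return True    # the check and the change happened together
            return False       # someone else changed the value first

cell = AtomicCell(0)
print(cell.compare_and_swap(0, 1))  # True: the value was 0, now it is 1
print(cell.compare_and_swap(0, 2))  # False: the value is already 1
```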
&lt;br /&gt;
The difference between them is in how you deal with the interaction between the storage and the queue of threads. &lt;br /&gt;
If you have a lock, with grab lock and release lock operations, does releasing the lock always succeed? Yes: if you hold the lock and you release it, releasing always succeeds. Grab lock, however, can block. When we talk about blocking, the thread is going to sit there and wait for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try lock. Try lock means: check whether the lock is grabbable. It will grab the lock if it&#039;s available, but if it&#039;s not available it returns immediately; it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you will try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once; what do the rest do? They are all blocked, and there are two ways for them to wait. One is to just go into a loop that keeps trying and trying until it succeeds. This is called a spin lock, and it is sometimes a good idea. But while a CPU is running a spin lock it is doing nothing else, which is not a good use of resources unless you expect the wait to be very small: no one is going to be holding the lock structure for very long, so it&#039;s better to just sit there and retry. It&#039;s like going to a gov&#039;t office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the threads queue up and wait, freeing up the processor to do something else, and the processor puts each waiting thread to sleep. What does sleep mean? Sleep implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on and will not consume any processor time for now. (In biological terms it is still consuming resources; it still consumes space, still lives in RAM and perhaps in cache, but it is not being actively run by a core.) When a thread yields, it is saying it doesn&#039;t need the CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on the timing granularity of the system, but in general you will sleep for at least that amount of time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we are talking about locks we are talking about multiple threads. Processes do not have shared storage; each has logically separate copies. Threads share an address space, and they use that shared memory to coordinate their operations. If the lock is not available, a thread queues up waiting for it, so you have these threads lined up waiting for service. &lt;br /&gt;
&lt;br /&gt;
Wait = sleep&lt;br /&gt;
&lt;br /&gt;
If a thread is actively waiting, running on the CPU, it is spinning. With spinning you don&#039;t really queue up; every waiting core is just sitting there spinning, so for 5 threads to be blocked on a spin lock, all 5 threads are running on 5 cores. You don&#039;t use spin locks on things you expect to wait on for long. But sometimes you do want a spin lock, because it takes effort to put a thread to sleep: putting a thread to sleep and then waking it up is relatively expensive. Putting a thread to sleep can involve a context switch, depending on how threads are implemented, and it involves a system call, because you call sleep().&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19366</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19366"/>
		<updated>2014-10-08T13:02:33Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10, yes, the alphabet is larger. When you talk about hex vs. decimal you are only changing how we read the number, but there is still something more correct about using hex over base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (10 symbols falls between 8 and 16), a fractional number of bits; it&#039;s messy. Decimal is not the native base of a computer. Base 16 maps perfectly onto base 2. That&#039;s why you also see octal: 3 bits per digit vs. 4 (but a reduced range). When you use hexadecimal, each digit represents 4 bits, and that is a bit cleaner. &lt;br /&gt;
&lt;br /&gt;
When we say the offset has a power-of-two range, it is because the page size is a power of two: the number of bits used to encode the offset determines the page size. The offset names a specific byte in a page, so the question is what range of byte positions can occur in a page. Classic x86 uses 4 KB pages.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 KB page? The offsets are 0 - 4095. Another thing that you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. Key assumption: when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No. &lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: something that lets you check a value and change it in one atomic operation, which is the basic functionality provided by the hardware. The semantics differ in these higher-level constructs. When do you use what? What is the purpose of these things? There is a huge amount of overlap. What you have, in all of them, is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them is in how you deal with the interaction between the storage and the queue of threads. &lt;br /&gt;
If you have a lock, with grab lock and release lock operations, does releasing the lock always succeed? Yes: if you hold the lock and you release it, releasing always succeeds. Grab lock, however, can block. When we talk about blocking, the thread is going to sit there and wait for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try lock. Try lock means: check whether the lock is grabbable. It will grab the lock if it&#039;s available, but if it&#039;s not available it returns immediately; it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you will try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;br /&gt;
&lt;br /&gt;
What if 5 processes try to grab the lock all at once; what do the rest do? They are all blocked, and there are two ways for them to wait. One is to just go into a loop that keeps trying and trying until it succeeds. This is called a spin lock, and it is sometimes a good idea. But while a CPU is running a spin lock it is doing nothing else, which is not a good use of resources unless you expect the wait to be very small: no one is going to be holding the lock structure for very long, so it&#039;s better to just sit there and retry. It&#039;s like going to a gov&#039;t office: if you want something to happen, you will probably have to wait in line, and when you are spin locking you are sitting in the queue doing nothing. What you can do instead is have a wait queue: the threads queue up and wait, freeing up the processor to do something else, and the processor puts each waiting thread to sleep. What does sleep mean? Sleep implies something about the scheduling of the thread: it is kicked off the CPU it&#039;s running on and will not consume any processor time for now. (In biological terms it is still consuming resources; it still consumes space, still lives in RAM and perhaps in cache, but it is not being actively run by a core.) When a thread yields, it is saying it doesn&#039;t need the CPU anymore. When you call sleep you are making a system call that says: I don&#039;t want to do anything for at least, say, 10 seconds. You are taken off the list of running processes, put on the list of blocked processes, and you wait for the timer to go off; then your process is brought back in and allowed to run. The exact delay depends on the timing granularity of the system, but in general you will sleep for at least that amount of time.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19365</id>
		<title>Operating Systems 2014F Lecture 10</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_10&amp;diff=19365"/>
		<updated>2014-10-08T12:55:57Z</updated>

		<summary type="html">&lt;p&gt;Afry: Created page with &amp;quot;By using base 16 vs. base 10 - yes the alphabet is larger. when you talk about hex vs. decimal, you are changing how we read it. There&amp;#039;s something more correct about using hex...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;By using base 16 vs. base 10, yes, the alphabet is larger. When you talk about hex vs. decimal you are only changing how we read the number, but there is still something more correct about using hex over base 10. A hexadecimal digit represents exactly 4 bits. How many bits does a decimal digit represent? Somewhere between 3 and 4 (10 symbols falls between 8 and 16), a fractional number of bits; it&#039;s messy. Decimal is not the native base of a computer. Base 16 maps perfectly onto base 2. That&#039;s why you also see octal: 3 bits per digit vs. 4 (but a reduced range). When you use hexadecimal, each digit represents 4 bits, and that is a bit cleaner. &lt;br /&gt;
&lt;br /&gt;
When we say the offset has a power-of-two range, it is because the page size is a power of two: the number of bits used to encode the offset determines the page size. The offset names a specific byte in a page, so the question is what range of byte positions can occur in a page. Classic x86 uses 4 KB pages.&lt;br /&gt;
&lt;br /&gt;
What is the range of offsets in a 4 KB page? The offsets are 0 - 4095. Another thing that you will need for the assignment: there is a question that refers to Peterson&#039;s algorithm, which doesn&#039;t work on modern machines. The big idea is the memory hierarchy. Key assumption: when one processor writes to memory, the other processors will see that write, that value. Is that true in modern systems? No. &lt;br /&gt;
&lt;br /&gt;
Concurrency Mechanisms:&lt;br /&gt;
&lt;br /&gt;
locks&lt;br /&gt;
condition variables&lt;br /&gt;
Semaphores&lt;br /&gt;
&lt;br /&gt;
You build all of these using the same mechanism: something that lets you check a value and change it in one atomic operation, which is the basic functionality provided by the hardware. The semantics differ in these higher-level constructs. When do you use what? What is the purpose of these things? There is a huge amount of overlap. What you have, in all of them, is some sort of storage with atomic operations, so you can check its value and change it all at one time (so you don&#039;t have race conditions when you manipulate the value), and then you have a queue of threads.&lt;br /&gt;
&lt;br /&gt;
The difference between them is in how you deal with the interaction between the storage and the queue of threads. &lt;br /&gt;
If you have a lock, with grab lock and release lock operations, does releasing the lock always succeed? Yes: if you hold the lock and you release it, releasing always succeeds. Grab lock, however, can block. When we talk about blocking, the thread is going to sit there and wait for the condition to happen; nothing happens to that thread until the condition is met. You often also have a try lock. Try lock means: check whether the lock is grabbable. It will grab the lock if it&#039;s available, but if it&#039;s not available it returns immediately; it&#039;s non-blocking, because maybe you don&#039;t want to sit there and block. Most of the time, though, you will try to grab the lock and, if that fails, wait until grabbing the lock succeeds.&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19353</id>
		<title>Operating Systems 2014F Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19353"/>
		<updated>2014-10-06T17:55:47Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Audio from the lecture given on September 10, 2014 [http://homeostasis.scs.carleton.ca/~soma/os-2014f/lectures/comp3000-2014f-lec02-10Sep2014.mp3 is now available].&lt;br /&gt;
&lt;br /&gt;
{{{&lt;br /&gt;
machine state &lt;br /&gt;
program counter &lt;br /&gt;
process states &lt;br /&gt;
paging / swapping &lt;br /&gt;
process &lt;br /&gt;
running program&lt;br /&gt;
virtualizing&lt;br /&gt;
time/space sharing&lt;br /&gt;
mechanisms &lt;br /&gt;
policy&lt;br /&gt;
}}}&lt;br /&gt;
&lt;br /&gt;
Chapter 4 in the book:&lt;br /&gt;
&lt;br /&gt;
processes - key abstraction in a modern operating system&lt;br /&gt;
&lt;br /&gt;
Sitting all day kills you; it seriously reduces your life expectancy, and working out doesn&#039;t necessarily make up for sitting all day. It helps if you walk around for 5 minutes every hour. Anyone use typing break programs? An occupational hazard of the career path you have chosen is that you sit in front of a computer typing. Anil started typing Dvorak early in order to avoid repetitive strain injuries. &lt;br /&gt;
&lt;br /&gt;
[http://www.lcdf.org/xwrits/ xwrits] is a link to save your wrists on a *nix machine: it shows you hand gestures to say it&#039;s time to get up, and you can get it to insult you with gestures that vary by culture, in order to tell you to take a break.&lt;br /&gt;
&lt;br /&gt;
When typing it is important to take breaks.&lt;br /&gt;
&lt;br /&gt;
You need to distinguish between programs and processes. A program is imprecise in the context of an O/S. &lt;br /&gt;
&lt;br /&gt;
A program is imprecise in the context of operating systems. Your web browser: is that a program?&lt;br /&gt;
A web browser is a lot of little programs that make up one big program; it is not a precise thing. What is precise is an executable. An executable is a file on disk that can be exec&#039;d. (Disks are no longer disks; they are all kinds of storage now.) That is the Unix version of the statement. There is a system call, called execve, that takes as one of its parameters a file, and that file is then loaded into a process, obliterating whatever else was in the process. &lt;br /&gt;
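A minimal sketch of that separation on a Unix system: fork() creates the process, and execv() loads an executable into it (the helper name run is made up for this sketch):

```python
import os

def run(path, argv):
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the named executable.
        os.execv(path, argv)   # only returns if the exec fails
        os._exit(127)
    # Parent: wait for the child and report how it exited.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

code = run("/bin/echo", ["echo", "hello from a new program"])
print("child exited with status", code)
```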
&lt;br /&gt;
Code can take many forms in a computer system; it is just one form of data.&lt;br /&gt;
&lt;br /&gt;
For example, you have a text file that contains a JavaScript or Perl program or something. That is a program, but it is also a text document; the operating system kernel does not recognize it as an executable. You cannot give it as an argument to the execve system call. The system has to run it indirectly: it has to find another executable to run that code. So you have executables and you have processes.&lt;br /&gt;
&lt;br /&gt;
A process is an executable that has been executed: loaded into memory and started running. You should think of a process as an abstraction of a computer that can only run one program at a time. (On older personal computers, or machines of the early 1960s, there was no abstraction of a process and no notion of running more than one program at a time. Logically speaking, when you wanted to run a program, all of memory would be loaded with that program, and when you wanted to quit the program, you cut the power and turned the computer off.) Such machines run one program at a time: you load it off the disk, and it has complete control of the machine. The process is the abstraction you get when you say: we don&#039;t want every program to have complete control of the computer, because I do not want to have to reboot the computer to switch programs. I want to run different programs concurrently, for multiple reasons, for instance to chain multiple programs together to produce a result (a Unix pipeline). The process gives each running program its own virtual computer to run in. &lt;br /&gt;
&lt;br /&gt;
Virtualizing / virtualization (the term is rather overloaded): what am I talking about when I say virtual? Something that isn&#039;t real; it&#039;s not a real thing. When people talk about virtual reality, they are talking about something that can still be experienced. What we are saying in a computer science context is that virtual really means an abstraction: the real thing we actually have is not good enough, it doesn&#039;t have the qualities you want, so you transform it into something more useful in some way. When we talk about a virtual machine, we are talking about a machine (computer) that does not exist, in the sense that it is not embodied in actual hardware. &lt;br /&gt;
&lt;br /&gt;
(From the theoretical side of computer science:) all programming languages and programming systems are, to a first approximation, equivalent; a system that is Turing complete can run anything, and implementing one Turing complete system on top of another is the process of virtualization. The kind you have often heard of is the language-based virtual machine, for example the Java virtual machine. Really you could say the same about any higher-level language (Perl, JavaScript, Python, etc.): that code does not run directly on the processor; it runs inside another program which implements some kind of virtual machine. Strictly speaking, a language can be interpreted, which means you have a program that goes through line by line and figures out what each line is supposed to do and what the next instruction is; but essentially no modern language implementation operates that way. They all go through some sort of translation phase that converts the source to some byte code, and then a runtime runs that byte code. That runtime is what&#039;s called a virtual machine. Virtual machines are everywhere when we are talking about running programs. An operating system can be thought of as implementing a virtual machine too, and the virtual machine it implements is the process. There is a key difference between the virtual machine that makes processes and the typical language-based virtual machine, though the difference between them is getting smaller. Any idea what this difference is?&lt;br /&gt;
&lt;br /&gt;
Java based Virtual Machine - executes byte codes.&lt;br /&gt;
hardware can&#039;t interpret byte code &lt;br /&gt;
&lt;br /&gt;
What is the nature of the binary format that is run in an operating system process? What format is that code? Machine code: the code that is understood by the processor. Machine code here, byte code there; what&#039;s the difference? The hardware can&#039;t interpret byte code; that language needs another program to translate it. But why can&#039;t the processor understand Java byte code? It could; there are chips that run Java byte code natively. What&#039;s worse: the machine code your processor supposedly understands? It actually doesn&#039;t run it directly. This applies to modern processors such as x86 or x86-64, the most common for a PC, and similarly for ARM machine language, that sort of thing. &lt;br /&gt;
&lt;br /&gt;
This machine language is too annoying to use internally inside the microprocessor; it&#039;s not efficient, and it was not designed to run very fast. The processor actually has a front end that takes that code and translates it to another internal code. There have been processor startups where, instead of doing this directly on the chip, they put something like a Java virtual machine on the processor. Why am I saying this? The virtual and the real in computer science are often hard to tell apart; what is virtual to one group can be real to another. When you are coding in Java or C, that language is real to you; that is the abstraction you are working in, but there are other levels below you, and yours is generally not the real level. When you are dealing with millions of transistors, there is a lot of abstraction. The process is the virtual machine that your programs run in: you take an executable file on disk and load it into memory. There is a little problem with this concept: is there a one-to-one mapping between programs on disk and programs in memory? Not at all! Most programs on disk are not running at any given time, and a given program on disk can be running in many different processes; you can have multiple instances of the same program running at the same time. Logically, in an API for an operating system, you have to distinguish between the creation of a process and the loading of an executable into that process, because you want to facilitate that many-to-many mapping. What does that API look like? I&#039;ll give you another funny thing: if you are running one program, can you make that program do multiple things at the same time? Yes; there is this whole notion of threading, but a thread is not a process.&lt;br /&gt;
&lt;br /&gt;
==A thread is not a process.==&lt;br /&gt;
&lt;br /&gt;
Process = thread(s) + address space&lt;br /&gt;
&lt;br /&gt;
The CPU is virtual too, because it would be really annoying to only have 4 things running at a time. When we talk about things running at the same time, how many things are actually running at the same time? (Consider hyperthreading.) What does it mean to run things at the same time: is it actually running, or is it logically running? We want the abstraction that every process has its own computer. That is what we are talking about when we talk about threads. A process has memory in which it can run; inside that process, how many processors do I have running? Classically, only 1. When you get to multithreaded processes, there is more than one logical CPU. If you think about this, that&#039;s a mess! Having more than one program counter to track inside one address space causes lots of problems. What happens when they step on each other? They do crazy things, like changing the loop index from outside the loop. How do you reason about your code when things like this happen? When you put more than one logical CPU inside an address space, that can happen. For a long time, operating systems only supported processes with a single thread of execution; they supported lots of processes, but made sure each one had only 1 thread.&lt;br /&gt;
&lt;br /&gt;
Having more than one CPU running around inside the address space: what happens if two of them run the same code at once, or one changes the loop index from outside the loop? The right way to think of this is: don&#039;t do that. For a long time operating systems supported only one CPU inside each address space; they supported lots of processes, but each with only one CPU. That is kind of limiting, but why do you want your running programs to share memory at all? The main reason is to communicate. Shared memory has only one advantage: it can be very fast. How do you make sure you don&#039;t overwrite each other&#039;s messages? In modern computation, in distributed systems and big systems, you do almost everything you can to control this: when you share memory you put some sort of API on top of it to control access. The other cost is higher overhead in general: you now have an address space to keep track of, its own version of memory for each running program. That is so much overhead that it was a good while before computers supported it cheaply, because it takes a lot of transistors to do. It used to be that you would have a completely separate chip (an MMU) to take care of giving every running program its own address space; it is now integrated into CPUs. &lt;br /&gt;
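Threads sharing one address space, with an API (here a lock) on top of the shared memory to control access; a small sketch:

```python
import threading

# Threads share the address space: both see the same "counter" variable.
counter = 0
guard = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with guard:      # the API on top of shared memory
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 20000: the lock keeps the increments from colliding
```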
&lt;br /&gt;
A running program is a process: an address space plus one or more threads. That is the virtual machine you are running in, and it runs machine code. &lt;br /&gt;
&lt;br /&gt;
Did you see any assembly in 2401? Not really.&lt;br /&gt;
&lt;br /&gt;
=Sharing=&lt;br /&gt;
&lt;br /&gt;
Let me explain the terminology. Time and space sharing: when we talk about virtualizing resources, virtualizing the CPU, virtualizing RAM, what we are actually talking about is sharing. Like on a playground, we need to play nicely together.&lt;br /&gt;
An operating system is a set of mechanisms and policies for allowing time and space sharing of the computer&#039;s resources. In time sharing (taking turns), the processor is a limited resource: one program gets it for a while, another gets it for another while, then another gets it for a while. Space sharing means that you have all this RAM and you split it up; you have this disk and you split it up: one program gets part of it, and another program gets part of it. That is what we mean by space sharing. &lt;br /&gt;
&lt;br /&gt;
=Virtual memory and physical memory=&lt;br /&gt;
&lt;br /&gt;
[[File:Virtualmemory.png]]&lt;br /&gt;
&lt;br /&gt;
There&#039;s this distinction between virtual memory and physical memory. Physical memory is the RAM your computer actually has: you buy those little chips and plug them in (voila, it goes faster)! (SIMMs and DIMMs; you expand the RAM.) You see it at startup; that is physical memory, it is a real thing, and you get gigs of it now. Virtual memory is the memory each running program thinks it has. Memory is shared; we want to share it between multiple processes. A weird consequence of this is how we refer to memory: with addresses. With 4 gigs of RAM you get about 4 billion memory locations, 2^32 of them. Given that many addresses, one way you could share memory between running programs is: the first program you run gets address range 200 - 400, the second gets 500 - 800. But then, when you load any program binary, you have to rewrite it to use the addresses it is supposed to use. What address range is it supposed to get? What if your program wants more RAM? It is a bit of a pain; it is annoying. What we actually have instead is physical addresses and virtual addresses. When you load a process, all of its memory references are virtual addresses. Unless you have special hardware to accelerate it, you have to do some sort of table lookup, and this is happening on every memory access; there is a lot of hardware and a lot of operating system mechanism to make this run pretty damn fast. That is what we mean: every process can have an address 2000, but it is the virtual address 2000, and each one is mapped to a different physical address. (It&#039;s OVER 9000!)&lt;br /&gt;
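The idea that virtual address 2000 maps to a different physical address in each process can be sketched with toy page tables (the frame numbers here are invented; real page tables live in hardware-walked structures):

```python
# Toy page tables, assuming 4 KB pages: the same virtual address in two
# processes maps to different physical addresses.
PAGE_SIZE = 4096

# Hypothetical per-process tables: virtual page number to physical frame.
page_table_a = {0: 7}    # process A: virtual page 0 lives in frame 7
page_table_b = {0: 12}   # process B: virtual page 0 lives in frame 12

def translate(page_table, vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]          # the lookup done on every access
    return frame * PAGE_SIZE + offset

print(translate(page_table_a, 2000))  # 7*4096 + 2000 = 30672
print(translate(page_table_b, 2000))  # 12*4096 + 2000 = 51152
```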
&lt;br /&gt;
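Private-but-identical address spaces can be seen from Python on a Unix system. This is a sketch; it assumes CPython, where id() happens to return an object&#039;s virtual address:&lt;br /&gt;
&lt;br /&gt;
```python
import os

def same_virtual_address():
    # After fork(), the parent and child have separate copies of one
    # address space: the same virtual address in each process maps to
    # different physical memory once either side writes to it.
    x = 4242
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:                              # child process
        os.write(write_end, str(id(x)).encode())
        os._exit(0)
    os.waitpid(pid, 0)                        # wait for the child to finish
    child_address = int(os.read(read_end, 64))
    return child_address == id(x)
```
&lt;br /&gt;
Both processes report the same virtual address for x, even though the kernel backs the two copies with different physical pages.&lt;br /&gt;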
The virtual address space can be much bigger than physical memory. The difference between 64-bit and 32-bit processors is not how much physical memory you have, it&#039;s the size of the virtual address space. Do we have any computer with 2^64 bytes of RAM? No, that is a really big number; not in our lifetime. Every process has its own private address space. It&#039;s just like variable scoping: if you write one program and run another, do you expect the x in one to be the same x in the other? No. The same scoping holds between processes: the address spaces look the same, but they are mapped differently. The address space is fixed in size, because it&#039;s limited by the processor word size, but only part of it is allocated to the program. A program taking up 50 MB that wants more memory isn&#039;t asking for more address space; it&#039;s asking the operating system to back more of its virtual address space with storage: please, sir, can I have some more RAM?&lt;br /&gt;
&lt;br /&gt;
The operating system notices when you access memory you are not allowed to. The hardware raises an exception, the kernel handles it, and your process gets a segmentation fault. The operating system is very firm about it: if you don&#039;t handle the fault in your process, the process dies.&lt;br /&gt;
In modern operating systems, some programs are more equal than others, but just because you are root does not mean you are in control: it is the kernel that is in control, not the root user. Can the kernel have a segmentation fault? A program running with full privileges can still have a segmentation fault, and so can the kernel; when that happens, the machine crashes hard.&lt;br /&gt;
&lt;br /&gt;
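The &amp;quot;if you don&#039;t handle it, you die&amp;quot; behaviour can be sketched from Python by spawning a child that touches an invalid address. This assumes a Unix system; the exact signal can differ by platform:&lt;br /&gt;
&lt;br /&gt;
```python
import signal
import subprocess
import sys

def segfault_is_fatal_by_default():
    # Run a child Python process that reads from an invalid address via
    # ctypes. The hardware faults, the kernel turns the fault into
    # SIGSEGV, and because the child never handles the signal, it dies.
    bad_access = "import ctypes; ctypes.string_at(1)"
    child = subprocess.run([sys.executable, "-c", bad_access],
                           stderr=subprocess.DEVNULL)
    # On Unix, a negative return code means death by that signal number.
    return child.returncode == -signal.SIGSEGV
```
&lt;br /&gt;
The parent sees a negative return code, the Unix convention for a child killed by a signal.&lt;br /&gt;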
[http://slacksite.com/slackware/oops.html For Oops vs. Panics]&lt;br /&gt;
&lt;br /&gt;
On a Linux system, an oops is a kernel error message that gets logged; it is possible the system recovers. When the kernel fully panics, it just stops. A kernel panic in Linux is rare; the blue screen of death in Windows is a kernel panic. Deep enough in the kernel, if you make mistakes, you are just done: the kernel isn&#039;t only managing its own memory, it is managing memory on behalf of every other process, and a corrupted kernel could go on to corrupt everything on disk. Which would you rather have: the kernel stop, or the kernel continue to do uncontrollable things to the rest of the system?&lt;br /&gt;
&lt;br /&gt;
Environment variables for X: &lt;br /&gt;
&lt;br /&gt;
Did you see the command for listing all the environment variables? env. When you look through that list, there&#039;s an environment variable called DISPLAY. What the heck is that? One of the skills you need is to be able to figure this out: look at the environment variables, ask which programs require this one, Google for the DISPLAY environment variable, and you will see all kinds of stuff. The longer story is that the way you see things on a Unix system is that there is a program controlling the display, the X window system (with a part of the kernel controlling access to the hardware). Wayland is the next-generation replacement for the X window system; before the X window system, there was the W window system. X was designed to separate a program from its output, and it was designed to work over a network. The computer whose display you are sitting at runs the X server, because all the programs drawing on it are its clients, while the process sending the graphical commands could be somewhere else completely. The X window system is based on the X network protocol, so processes can be arbitrarily separated from their displays; DISPLAY is how a client knows where to display things. This provides network transparency.&lt;br /&gt;
&lt;br /&gt;
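For illustration only, here is a hypothetical helper (not part of X itself) that splits a DISPLAY value like remote.example.com:0.0 into its parts:&lt;br /&gt;
&lt;br /&gt;
```python
def parse_display(display):
    # DISPLAY has the form [host]:display[.screen]; an empty host means
    # a local connection, and ":0" is the first local display.
    host, _, rest = display.partition(":")
    number, _, screen = rest.partition(".")
    return host, int(number), int(screen) if screen else 0
```
&lt;br /&gt;
This is how, for example, ssh -X can point your clients at a forwarded display by setting DISPLAY to something like localhost:10.0.&lt;br /&gt;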
One view: send direct graphical output from another computer on the network. Bad because of latency.&lt;br /&gt;
&lt;br /&gt;
How do you get around the lag? Run more of the code near the display instead of shipping every drawing command across the network: have the X clients transfer code to the X server, so it runs where the display is. It&#039;s like the web: the server sends a page to your browser, and the page&#039;s code runs locally. Same idea, different technology stack.&lt;br /&gt;
&lt;br /&gt;
Mechanisms vs. policy:&lt;br /&gt;
&lt;br /&gt;
Mechanisms are the knobs that let us manipulate program state. They should be maximally flexible, so that they can implement whatever policies you want.&lt;br /&gt;
&lt;br /&gt;
Policies are what you should actually do with those knobs.&lt;br /&gt;
&lt;br /&gt;
X Server &amp;lt;= mechanism&lt;br /&gt;
&lt;br /&gt;
window manager, toolkit &amp;lt;= policy&lt;br /&gt;
&lt;br /&gt;
Windows - one call - CreateProcess() &amp;lt;--- many different parameters&lt;br /&gt;
&lt;br /&gt;
unix - fork() and execve(file, cmdline, env)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19351</id>
		<title>Operating Systems 2014F Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19351"/>
		<updated>2014-10-06T17:46:12Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Audio from the lecture given on September 10, 2014 [http://homeostasis.scs.carleton.ca/~soma/os-2014f/lectures/comp3000-2014f-lec02-10Sep2014.mp3 is now available].&lt;br /&gt;
&lt;br /&gt;
{{{&lt;br /&gt;
machine state &lt;br /&gt;
program counter &lt;br /&gt;
process states &lt;br /&gt;
paging / swapping &lt;br /&gt;
process &lt;br /&gt;
running program&lt;br /&gt;
virtualizing&lt;br /&gt;
time/space sharing&lt;br /&gt;
mechanisms &lt;br /&gt;
policy&lt;br /&gt;
}}}&lt;br /&gt;
&lt;br /&gt;
Chapter 4 in the book:&lt;br /&gt;
&lt;br /&gt;
processes - key abstraction in a modern operating system&lt;br /&gt;
&lt;br /&gt;
Sitting all day kills you: it seriously reduces your life expectancy, and working out doesn&#039;t necessarily make up for it. It helps if you walk around for 5 minutes every hour. Anyone use typing break programs? An occupational hazard of the career path you have chosen is that you sit in front of a computer typing. Anil started typing Dvorak early in order to avoid repetitive strain injuries.&lt;br /&gt;
&lt;br /&gt;
[http://www.lcdf.org/xwrits/ xwrits] is a program to save your wrists on a *nix machine: it shows hand gestures telling you it&#039;s time to get up, and you can even get it to insult you (the gestures vary by culture) in order to tell you to take a break.&lt;br /&gt;
&lt;br /&gt;
When typing it is important to take breaks.&lt;br /&gt;
&lt;br /&gt;
You need to distinguish between programs and processes. &amp;quot;Program&amp;quot; is imprecise in the context of an operating system.&lt;br /&gt;
&lt;br /&gt;
Is your web browser a program? A web browser is a lot of little programs that make up one big program; &amp;quot;program&amp;quot; is not a precise thing. What is precise is an executable. An executable is a file on disk that can be exec&#039;d. (Disks are no longer just disks, they are all kinds of things, but the idea stands.) This is the Unix version of the statement: there is a system call, execve, that takes as one of its parameters a file, and that file is then loaded into a process, obliterating whatever else was in the process.&lt;br /&gt;
&lt;br /&gt;
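The Unix version of the statement can be sketched with Python&#039;s thin wrappers over the same system calls (a sketch, without real error handling):&lt;br /&gt;
&lt;br /&gt;
```python
import os
import sys

def spawn(path, argv):
    # The classic Unix two-step: fork() clones the calling process, then
    # the child calls execve(), which loads the executable at path into
    # the child, obliterating the old process image.
    pid = os.fork()
    if pid == 0:                      # child: become the new program
        os.execve(path, argv, dict(os.environ))
        os._exit(127)                 # reached only if execve failed
    _, status = os.waitpid(pid, 0)    # parent: wait for the child
    return os.waitstatus_to_exitcode(status)
```
&lt;br /&gt;
Keeping process creation (fork) separate from loading an executable (execve) is what later lets many processes run the same executable.&lt;br /&gt;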
Code can take many forms in a computer system; an executable is just one form of data.&lt;br /&gt;
&lt;br /&gt;
For example, say you have a text file containing a JavaScript or Perl program. That is a program, but it is also a text document: the operating system kernel does not recognize it as an executable, and you cannot give it as an argument to the execve system call. It has to be run indirectly; the kernel has to find another executable, an interpreter, to run that code. So you have executables, and you have processes.&lt;br /&gt;
&lt;br /&gt;
A process is an executable that has been executed: loaded into memory and started running. Think of a process as an abstraction of a computer that can only run one program at a time. (On older personal computers there was no abstraction of a process and no notion of running more than one program at a time: logically speaking, when you wanted to run a program, all of memory was loaded with that program, and when you wanted to quit, you turned the computer off.) Such machines run one program at a time: you load it off the disk and it has complete control of the machine. A process is the abstraction you get when you say: we don&#039;t want every program to have complete control of the computer, because I don&#039;t want to reboot the computer to switch programs; I want to run different programs concurrently, for multiple reasons, such as chaining multiple programs together to produce a result (a Unix pipeline). The process gives each running program its own virtual computer to run on.&lt;br /&gt;
&lt;br /&gt;
Virtualizing / virtualization (the term is rather overloaded): what am I talking about when I say virtual? Something that isn&#039;t real; it&#039;s not a real thing, yet when people talk about virtual reality, they are talking about something that can be experienced. In a computer science context, when we say virtual we are really talking about an abstraction: the real thing we actually have is not good enough, it doesn&#039;t have the qualities you want, so you transform it into something more useful in some way. When we talk about a virtual machine, we are talking about a machine (a computer) that does not exist, in the sense that it is not embodied in actual hardware.&lt;br /&gt;
&lt;br /&gt;
(From the theoretical side of computer science.) All programming languages and programming systems are, to a first approximation, equivalent: a system that is Turing complete can compute anything computable. Turning one Turing complete system into another is the process of virtualization. The kind you&#039;ve most often heard of is the language-based virtual machine, for example the Java virtual machine; but really, any time you run a higher level language (Perl, JavaScript, Python, etc.), that code does not run directly on the processor. It runs inside another program which implements some kind of virtual machine. Strictly speaking, a language can be interpreted, meaning a program goes through the source line by line and figures out what each line is supposed to do next. But almost no modern language implementation operates that way: they all go through some sort of translation phase that converts the source to byte code, and then a runtime executes the byte code. That runtime is what&#039;s called a virtual machine. Virtual machines are everywhere when we talk about running programs: an operating system can be thought of as implementing a virtual machine too, and the virtual machine it implements is the process. There is a key difference between the virtual machine that makes processes and the typical language-based virtual machine, and the difference between them is getting smaller. Any idea what this difference is?&lt;br /&gt;
&lt;br /&gt;
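A toy example of such a runtime. This hypothetical stack machine is not any real VM&#039;s instruction set, but it shows the shape of the idea: a translation phase produces instructions, and an ordinary program, not the CPU, executes them:&lt;br /&gt;
&lt;br /&gt;
```python
def run_bytecode(program):
    # A miniature stack-machine "virtual machine": each instruction is a
    # (opcode, argument) pair interpreted by this loop, the runtime.
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()
```
&lt;br /&gt;
Pushing 2 and 3, adding, then pushing 4 and multiplying evaluates (2 + 3) * 4.&lt;br /&gt;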
Java based Virtual Machine - executes byte codes.&lt;br /&gt;
hardware can&#039;t interpret byte code &lt;br /&gt;
&lt;br /&gt;
What is the nature of the binary format being run in an operating system process? What format is that code? Machine code: the code understood by the processor. Machine code here, byte code there; what&#039;s the difference? The hardware can&#039;t interpret byte code directly; that language needs another program to translate it for the processor. Why can&#039;t the processor understand Java byte code? It could: there are chips that run Java byte code natively. What&#039;s weirder: does your processor actually understand its own machine code? It actually doesn&#039;t, not directly. Modern processors run x86 or x86-64 (the most common for a PC) or ARM machine language, that sort of thing.&lt;br /&gt;
&lt;br /&gt;
That external machine language is too annoying to use internally inside the microprocessor: it&#039;s not efficient and was not designed to run very fast. The chip actually has a front end that takes that code and translates it to another, internal code. There have even been processor startups that, instead of exposing the native instruction set directly, put something like a Java virtual machine on the processor. Why am I saying this? The virtual and the real in computer science are often hard to tell apart; what is virtual to one group can be real to another. When you are coding in Java or C, that language is real to you: that is the abstraction you are working in. But there are other levels below you, and yours is generally not the &amp;quot;real&amp;quot; one; by the time you are dealing with millions of transistors, there is a lot of abstraction in between. The process is the virtual machine that you run programs in. You take a file from disk and load it into memory, but there is a little problem with this concept: is there a one-to-one mapping between programs on disk and programs in memory? Not at all! Most programs on disk are not running at any given time, and a given program on disk can be running in many different processes: you can have multiple instances of the same program running at the same time. So logically, in an operating system&#039;s API, you have to distinguish between the creation of a process and the loading of an executable into that process, because you want to facilitate this many-to-many mapping. What does that API look like? And here&#039;s another funny thing: if you are running one program, can you make that program do multiple things at the same time? Yes; that is the whole notion of threading. But a thread is not a process.&lt;br /&gt;
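On Unix this split between creating a process and loading an executable is visible directly in the API: fork() creates the process, execve() loads an executable into it. A minimal sketch in Python (which wraps those same system calls), assuming /bin/ls exists on the machine:&lt;br /&gt;

```python
import os

pid = os.fork()              # step 1: create a new process (a copy of this one)
if pid == 0:
    # step 2: in the child, replace the copied image with a new executable
    os.execv("/bin/ls", ["ls", "-l"])
    # execv only returns on failure; nothing below runs on success
else:
    _, status = os.waitpid(pid, 0)   # parent waits for the child to finish
    print("child exited with", os.WEXITSTATUS(status))
```

Note how nothing forces the child to exec: it could keep running the same program, which is exactly how several processes end up running one executable.&lt;br /&gt;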
&lt;br /&gt;
==A thread is not a process.==&lt;br /&gt;
&lt;br /&gt;
Process = thread(s) + address space&lt;br /&gt;
&lt;br /&gt;
The CPU is virtualized too, because it would be really annoying to only ever have, say, 4 things running at a time. When we talk about things running at the same time, how many things are actually running at the same time (hyperthreading aside)? What does it mean to run things at the same time: is each one physically running, or just logically running? We want the abstraction that every process has its own computer; that&#039;s what we are talking about when we talk about threads. A process has memory in which it can run, but inside that process, how many processors do I have running? Classically, only one. When you get to multithreaded processes, there is more than one logical CPU. If you think about it, that&#039;s a mess: having more than one program counter to track inside one address space causes lots of problems. What happens when the threads step on each other? They do crazy things like change the loop index from outside the loop. How do you reason about your code when things like this can happen? When you put more than one logical CPU inside an address space, that can happen. For a long time operating systems only supported processes with a single thread of execution: they supported lots of processes, but made sure each one had only one thread.&lt;br /&gt;
&lt;br /&gt;
So why would you ever want your running programs to share memory? The main reason is to communicate, and shared memory has one big advantage: it can be very fast. But how do you make sure the threads don&#039;t overwrite each other&#039;s messages? In modern computation - distributed systems, big systems - you do almost everything you can to control this: when you share memory, you put some sort of API on top of it to control access. The alternative, separate address spaces, has costs of its own: besides potential communication overhead, you have higher overhead in general, because there is now an address space to keep track of, a private version of memory for each running program. That&#039;s so much overhead that it was a good while before computers supported it, because it takes a lot of transistors to do. It used to be that a completely separate chip, an MMU (memory management unit), took care of giving every running program its own address space; it is now integrated into CPUs. &lt;br /&gt;
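Putting &amp;quot;some sort of API on top&amp;quot; of shared memory usually means a lock. A small sketch with two threads updating one shared counter; the names are illustrative, and the lock is what makes the result predictable:&lt;br /&gt;

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:            # only one thread may touch counter at a time
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # 200000 every time; without the lock, updates can be lost
```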
&lt;br /&gt;
A running program is a process: an address space plus one or more threads. That&#039;s the virtual machine you are running in, and it runs machine code.  &lt;br /&gt;
&lt;br /&gt;
Did you see any assembly in 2401? Not really.&lt;br /&gt;
&lt;br /&gt;
=Sharing=&lt;br /&gt;
&lt;br /&gt;
Let&#039;s explain some terminology: time and space sharing. When we talk about virtualizing resources - virtualizing the CPU, virtualizing RAM - what we are actually talking about is sharing. Like on a playground, we need to play nicely together.&lt;br /&gt;
An operating system is a set of mechanisms and policies for allowing time and space sharing of computer resources. Time sharing is taking turns: the processor is a limited resource, so one program gets it for a while, then another gets it for a while, then another. Space sharing means you have all this RAM, so you split it up; you have this disk, so you split it up: one program gets part of it, and another program gets another part. That&#039;s what we mean by space sharing. &lt;br /&gt;
&lt;br /&gt;
=Virtual memory and physical memory=&lt;br /&gt;
&lt;br /&gt;
[[File:Virtualmemory.png]]&lt;br /&gt;
&lt;br /&gt;
There&#039;s a distinction between virtual memory and physical memory. Physical memory is the RAM your computer actually has: you buy those little chips and plug them in (voila, it goes faster)! (SIMMs &amp;amp; DIMMs are how you expand the RAM.) You see it counted at startup; it&#039;s a real thing, and you get gigabytes of it now. Virtual memory is the memory each running program thinks it has. Memory is shared; we want to share it between multiple processes. A weird consequence of this is how we refer to memory: with addresses. With 4 GiB of RAM you get about 4 billion memory locations, 2^32 of them. Given that range of addresses, one way you could share memory is this: the first program you run gets address range 200 to 400, the second gets 500 to 800, and when you load any program binary, you rewrite it to use the addresses it was actually given. Which address range will it get? What if your program wants more RAM? It&#039;s a bit of a pain; it&#039;s annoying. What we actually have instead is physical addresses and virtual addresses. When you load a process, all of its memory references are virtual addresses, and unless you have special hardware to accelerate it, every access requires some sort of table lookup, happening on every memory access. There&#039;s a lot of hardware and a lot of operating system mechanism to make this run pretty damn fast. That&#039;s what we mean: every process can have an address 2000, but that is the virtual address 2000, and in each process it is mapped to a different physical address. (It&#039;s OVER 9000!)&lt;br /&gt;
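The &amp;quot;every process can have an address 2000&amp;quot; point can be demonstrated after a fork(): parent and child report the same virtual address for the same object, yet each has its own private copy. This sketch leans on a CPython implementation detail, that id() returns the object&#039;s virtual address:&lt;br /&gt;

```python
import os

data = ["hello"]                 # created before the fork, so both processes have it
addr = id(data)                  # in CPython, id() is the object's virtual address

pid = os.fork()
if pid == 0:                     # child
    data.append("child wrote this")
    assert id(data) == addr      # same virtual address as the parent sees
    os._exit(0)

os.waitpid(pid, 0)
assert id(data) == addr          # parent: same virtual address...
assert len(data) == 1            # ...but the child's write is not visible here:
                                 # separate address spaces, separate physical pages
```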
&lt;br /&gt;
Virtual memory can be as big as you want it to be. When you compare 64-bit processors to 32-bit processors, the difference is not how much physical memory you have; it&#039;s the size of the virtual address space, which is much bigger. Do we have any computer with 2^64 bytes of RAM? No; that is a really big number. Not in our lifetime. Everyone has their own private address space. It&#039;s just like namespaces: when you write one program and run another, do you expect the x in one program to be the same x in the other? No; the scoping is per process. The address spaces look the same, but they are mapped differently. The address space is fixed in size, because it&#039;s limited by the processor word size, but how much of it is actually allocated to the program can vary. You might have a program that&#039;s taking up 50 MB and wants some more memory; what it is really asking is for part of its virtual address space to be turned into usable storage. It asks the operating system: please, can I have some more RAM? (Please, sir, can I have some more?)&lt;br /&gt;
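That &amp;quot;please can I have some more&amp;quot; request is a system call; on Unix-like systems one common mechanism today is mmap (allocators like malloc are built on top of calls like it). A sketch using Python&#039;s mmap module to ask the OS to back part of the address space with anonymous memory:&lt;br /&gt;

```python
import mmap

length = 1024 * 1024                 # ask for 1 MiB of address space
mem = mmap.mmap(-1, length)          # -1: anonymous memory, not backed by a file
mem[0:5] = b"hello"                  # the new pages are ordinary usable memory
print(mem[0:5])                      # b'hello'
mem.close()                          # give the address range back to the OS
```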
&lt;br /&gt;
The operating system notices when you access parts of memory that you are not allowed to. The hardware raises an exception, the operating system handles it, and you get a segmentation fault: you were accessing memory you shouldn&#039;t be accessing. The operating system is very firm about it; if you don&#039;t handle the fault in your process, your process dies.&lt;br /&gt;
In modern operating systems, some programs are more equal than others. Just because you are root does not mean you are in control: the thing really in control is not the root user, it is the kernel. Can the kernel have a segmentation fault? A program running with full privileges can still have a segmentation fault, and the kernel can too. When that happens, the machine crashes hard. &lt;br /&gt;
&lt;br /&gt;
[http://slacksite.com/slackware/oops.html For Oops vs. Panics]&lt;br /&gt;
&lt;br /&gt;
On a Linux system, an oops is the kernel logging an error message; when the kernel fully panics, it just stops. A kernel panic in Linux is rare; the blue screen of death in Windows is a kernel panic. After an oops it is possible the system recovers, but if you make mistakes deep enough in the kernel, you are just done. The kernel isn&#039;t just managing its own memory, it is managing memory on behalf of all the other processes, and if the kernel is corrupt, it can corrupt everything, even data on disk. Which would you rather have: the kernel stops, or it continues to do uncontrollable things to other parts of the system?&lt;br /&gt;
&lt;br /&gt;
Environment variables for X: &lt;br /&gt;
&lt;br /&gt;
Remote display in X - direct graphical output from another computer on the network - bad because of latency &lt;br /&gt;
&lt;br /&gt;
How do you get around the lag? Run more of the code on the client instead of the server: have the X clients transfer some code to the X server, so it runs next to the display. Compare a modern website: you download the page and its code runs in your browser. Same idea, different technology stack.&lt;br /&gt;
&lt;br /&gt;
Mechanisms vs. Policy - &lt;br /&gt;
&lt;br /&gt;
mechanisms - things that do things - the knobs that let us manipulate program state - should be maximally flexible, so that they can implement whatever policies you want &lt;br /&gt;
&lt;br /&gt;
policies - what you should do with those knobs &lt;br /&gt;
&lt;br /&gt;
X Server &amp;lt;= mechanism&lt;br /&gt;
&lt;br /&gt;
window manager, toolkit &amp;lt;= policy&lt;br /&gt;
&lt;br /&gt;
Windows - one call - CreateProcess() &amp;lt;--- many different parameters&lt;br /&gt;
&lt;br /&gt;
unix - two calls - fork() and execve(file, argv, envp)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19349</id>
		<title>Operating Systems 2014F Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19349"/>
		<updated>2014-10-06T17:30:43Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Audio from the lecture given on September 10, 2014 [http://homeostasis.scs.carleton.ca/~soma/os-2014f/lectures/comp3000-2014f-lec02-10Sep2014.mp3 is now available].&lt;br /&gt;
&lt;br /&gt;
{{{&lt;br /&gt;
machine state &lt;br /&gt;
program counter &lt;br /&gt;
process states &lt;br /&gt;
paging / swapping &lt;br /&gt;
process &lt;br /&gt;
running program&lt;br /&gt;
virtualizing&lt;br /&gt;
time/space sharing&lt;br /&gt;
mechanisms &lt;br /&gt;
policy&lt;br /&gt;
}}}&lt;br /&gt;
&lt;br /&gt;
Chapter 4 in the book:&lt;br /&gt;
&lt;br /&gt;
processes - key abstraction in a modern operating system&lt;br /&gt;
&lt;br /&gt;
Sitting all day kills you: it seriously reduces your life expectancy, and working out doesn&#039;t necessarily make up for sitting all day. It helps if you walk around for 5 minutes every hour. Anyone use typing-break programs? An occupational hazard of the career path you have chosen is that you sit in front of a computer typing. Anil started typing Dvorak early in order to avoid repetitive strain injuries. &lt;br /&gt;
&lt;br /&gt;
[http://www.lcdf.org/xwrits/ xwrits] is a program to save your wrists on a *nix machine: it shows hand gestures to say it&#039;s time to get up, and you can even get it to insult you, with the gesture varying by culture, in order to tell you to take a break.&lt;br /&gt;
&lt;br /&gt;
When typing it is important to take breaks.&lt;br /&gt;
&lt;br /&gt;
You need to distinguish between programs and processes. &amp;quot;Program&amp;quot; is imprecise in the context of an O/S. &lt;br /&gt;
&lt;br /&gt;
Your web browser: is that a program?&lt;br /&gt;
A web browser is a lot of little programs that together make up one big program, so &amp;quot;program&amp;quot; is not a precise thing. What is precise is an executable: a file on disk that can be exec&#039;d. (Disks are no longer literally disks; they are all kinds of storage these days.) This is the Unix version of the statement: there is a system call, execve, that takes a file as one of its parameters, and that file is then loaded into a process, obliterating whatever else was in the process. &lt;br /&gt;
&lt;br /&gt;
Code can take many forms in a computer system; an executable is just one form of data.&lt;br /&gt;
&lt;br /&gt;
For example, say you have a text file containing a JavaScript or Perl program. That is a program, but it is also a text document: the operating system kernel does not recognize it as an executable, and you cannot give it as an argument to the execve system call. It has to be run indirectly: the kernel has to find another executable, an interpreter, to run that code. So you have executables, and you have processes.&lt;br /&gt;
&lt;br /&gt;
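The point above can be seen directly - a minimal sketch, assuming a Unix system and using Python&#039;s os module to issue the raw calls: execing a plain text file fails, and you must exec an interpreter instead, handing it the file as data.&lt;br /&gt;

```python
import errno, os, stat, sys, tempfile

# Create a plain text file holding Python source: execute bit set,
# but no '#!' interpreter line and no machine code inside.
fd, path = tempfile.mkstemp()
os.write(fd, b'print("hello from the script")\n')
os.close(fd)
os.chmod(path, stat.S_IRWXU)

# The kernel cannot exec it directly: it is not a recognized
# executable format, so execv fails with ENOEXEC.
try:
    os.execv(path, [path])
except OSError as e:
    print("execv failed:", errno.errorcode[e.errno])

# To run it, exec an interpreter executable and pass the file as an
# argument. (This replaces the current process image, as execve does.)
os.execv(sys.executable, [sys.executable, path])
```

A shell hides this from you: POSIX execvp-style calls fall back to running an ENOEXEC file with the shell.&lt;br /&gt;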
A process is an executable that has been executed - loaded into memory and started running. You should think of a process as an abstraction of a computer that can only run one program at a time. (On older computers - early 1960&#039;s or so - there was no abstraction of a process and no notion of running more than one program at a time. Logically speaking, when you wanted to run a program, all of memory would be loaded with that program, and when you wanted to quit the program, you cut the power.) Such machines run one program at a time: you load it off the disk and it has complete control of the machine. A process is the abstraction you get when you say: we don&#039;t want every program to have complete control of the computer, because I do not want to have to reboot the computer to switch programs. I want to run different programs concurrently, for multiple reasons - for example, to chain multiple programs together to produce a result (a Unix pipeline). The process gives each running program its own virtual computer to run in. &lt;br /&gt;
&lt;br /&gt;
Virtualizing / virtualization (term is rather overloaded) What am I talking about when I say virtual? Something that isn&#039;t real. It&#039;s not a real thing. When people talk about virtual reality, they are talking about something that can be experienced. What we are saying in a computer science context: When we say virtual, we are really talking about an abstraction - What we actually have, the real thing is not good enough, it doesn&#039;t have qualities that you want, so you want to transform it into something more useful (in some way). When we talk about a virtual machine, we are talking about a machine (computer) that does not exist, in the sense that it is not embodied in actual hardware. &lt;br /&gt;
&lt;br /&gt;
(From the theoretical side of computer science:) All programming languages and programming systems are, to a first approximation, equivalent - a system that is Turing complete can run anything, and turning one Turing complete system into another is essentially what virtualization does. The kind you&#039;ve often heard of is the language-based virtual machine - for example, the Java virtual machine. Really, any time you run a higher-level language (Perl, JavaScript, Python, etc.), that code does not run directly on the processor; it runs inside another program which implements some kind of virtual machine. Strictly speaking, a lot of languages can be interpreted, meaning a program goes through the code line by line and figures out what each line is supposed to do and what the next instruction is - but essentially no modern language implementation operates that way. They all go through some sort of translation phase that converts the source to byte code, and then the runtime executes the byte code. That runtime is what&#039;s called a virtual machine. Virtual machines are everywhere when we talk about running programs: operating systems can be thought of as implementing a virtual machine too, and the virtual machine an OS implements is the process. There is a key difference between the virtual machine that makes processes and the typical language-based virtual machine (though the difference is getting smaller). Any idea what this difference is?&lt;br /&gt;
&lt;br /&gt;
Java based Virtual Machine - executes byte codes.&lt;br /&gt;
hardware can&#039;t interpret byte code &lt;br /&gt;
&lt;br /&gt;
What is the nature of the binary format being run in an operating system process? What format is that code? Machine code - the code understood by the processor. Machine code on one side, byte code on the other - what&#039;s the difference? The hardware can&#039;t interpret byte code; that language needs another program to translate it. But why can&#039;t the processor understand Java byte code? It could - there are chips that run Java byte code natively. And here&#039;s what&#039;s worse: the machine code that your processor &#039;understands&#039;? It actually doesn&#039;t, not directly - modern processors, x86 or x86-64 (the most common things for a PC), or ARM machine language, that sort of thing. &lt;br /&gt;
&lt;br /&gt;
This language is too annoying to use internally inside the microprocessor - it&#039;s not efficient and was not designed to run very fast. The processor actually has a front end that takes that code and translates it into another, internal code. There have even been processor startups that, instead of doing this directly on the chip, put something like a Java virtual machine on the processor. Why am I saying this? Because the virtual and the real in computer science are often hard to tell apart: what is virtual to one group can be real to another. When you are coding in Java or C, that language is real to you - that is the abstraction you are working in - but there are other levels below you, and yours is generally not the &#039;real&#039; level. When you are dealing with millions of transistors, there is a lot of abstraction. The process, then, is the virtual machine in which you run programs: you take a file from disk and load it into memory. There is a little problem with this concept, though. Is there a one-to-one mapping between programs on disk and programs in memory? Not at all! Most programs on disk are not running at any given time, and a given program on disk can be running in many different processes - you can have multiple instances of the same program running at the same time. So, logically, in an API for an operating system you have to distinguish between the creation of a process and the loading of an executable into that process, because you want to facilitate that many-to-many mapping. What does that API look like? And I&#039;ll give you another funny thing: if you are running one program, can you make that program do multiple things at the same time? Yes - there is this whole notion of threading. But a thread is not a process.&lt;br /&gt;
&lt;br /&gt;
==A thread is not a process.==&lt;br /&gt;
&lt;br /&gt;
Process = thread(s) + address space&lt;br /&gt;
&lt;br /&gt;
The CPU is virtualized too, because it would be really annoying to only have 4 things running at a time. When we talk about things running at the same time - how many things are actually running at the same time? (Hyperthreading.) What does it mean to run things at the same time: is it actually running, or logically running? We want the abstraction that every process has its own computer. That&#039;s where threads come in. A process has memory in which it can run; inside that process, how many processors do I have running? Classically, only 1. When you get to multithreaded processes, there is more than one logical CPU. If you think about this - that&#039;s a mess! Having more than one program counter to track inside one address space causes lots of problems. What happens when they step on each other? They do crazy things like change the loop index from outside the loop. How do you reason about your code when things like this happen? When you put more than one logical CPU inside an address space, that can happen. For a long time operating systems only supported processes with a single thread of execution - they supported lots of processes, but made sure each one had only 1 thread.&lt;br /&gt;
&lt;br /&gt;
So: more than one CPU running around inside the same address space. What happens if two of them try running the same code at once, or one changes the loop index from outside the loop? The right way to think of this is: don&#039;t do that. For a long time operating systems supported only one CPU inside an address space - lots of processes, but one CPU each. That&#039;s kind of limiting. But why do you want your running programs to share memory at all? The main reason is to communicate, and shared memory has only one advantage: it can be very fast. How do you make sure you don&#039;t overwrite each other&#039;s messages? In modern computation - distributed systems, big systems - when you share memory you almost always put some sort of API on top of it to control access. The cost, other than potential communication overhead, is higher overhead in general: you now have an address space to keep track of, its own version of memory for each running program. That&#039;s so much overhead that it was a good while before computers supported this, because it takes a lot of transistors to do. It used to be that a completely separate chip (an MMU) took care of giving every running program its own address space; it is now integrated into CPUs. &lt;br /&gt;
&lt;br /&gt;
A running program is a process: an address space plus one or more threads. That&#039;s the virtual machine you are running in, and it runs machine code.  &lt;br /&gt;
&lt;br /&gt;
Did you see any assembly in 2401? Not really.&lt;br /&gt;
&lt;br /&gt;
=Sharing=&lt;br /&gt;
&lt;br /&gt;
Some terminology. Time and space sharing: when we talk about virtualizing resources - virtualizing the CPU, virtualizing RAM - what we are actually talking about is sharing. Like on a playground, we need to play nicely together.&lt;br /&gt;
An operating system is a set of mechanisms and policies for allowing time and space sharing of computer resources. Time sharing is taking turns: the processor is a limited resource, so one program gets it for a while, then another, then another.  Space sharing means you have all this RAM, so split it up; you have this disk, so split it up - one program gets part of it, and another program gets another part. That&#039;s what we mean by space sharing. &lt;br /&gt;
&lt;br /&gt;
=Virtual memory and physical memory=&lt;br /&gt;
&lt;br /&gt;
[[File:Virtualmemory.png]]&lt;br /&gt;
&lt;br /&gt;
There&#039;s this distinction between virtual memory and physical memory. Physical memory is the RAM your computer actually has - you buy those little chips and plug them in (voila, it goes faster!).&lt;br /&gt;
A program running with full privileges can still have a segmentation fault. The kernel can also have a segmentation fault; when that happens, the machine will crash hard. &lt;br /&gt;
&lt;br /&gt;
[http://slacksite.com/slackware/oops.html For Oops vs. Panics]&lt;br /&gt;
&lt;br /&gt;
Environment variables for X: &lt;br /&gt;
&lt;br /&gt;
The X view: direct graphical output from another computer on the network - bad because of latency. &lt;br /&gt;
&lt;br /&gt;
How to get around the lag: run more of the code on the client instead of the server. Have the X clients transfer some code to the X server, to run there. This is like a website: you download the page and it runs in your browser - same idea, different technology stack.&lt;br /&gt;
&lt;br /&gt;
Mechanisms vs. Policy - &lt;br /&gt;
&lt;br /&gt;
mechanisms - the things that do things, the knobs that let us manipulate program state - should be maximally flexible, so that they can implement whatever policies you want. &lt;br /&gt;
&lt;br /&gt;
policies are what you should actually do with those mechanisms &lt;br /&gt;
&lt;br /&gt;
X Server &amp;lt;= mechanism&lt;br /&gt;
&lt;br /&gt;
window manager, toolkit &amp;lt;= policy&lt;br /&gt;
&lt;br /&gt;
Windows - one call - CreateProcess() &amp;lt;--- many different parameters&lt;br /&gt;
&lt;br /&gt;
unix - fork() and execve(file, cmdline, env)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19348</id>
		<title>Operating Systems 2014F Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19348"/>
		<updated>2014-10-06T17:27:34Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Audio from the lecture given on September 10, 2014 [http://homeostasis.scs.carleton.ca/~soma/os-2014f/lectures/comp3000-2014f-lec02-10Sep2014.mp3 is now available].&lt;br /&gt;
&lt;br /&gt;
{{{&lt;br /&gt;
machine state &lt;br /&gt;
program counter &lt;br /&gt;
process states &lt;br /&gt;
paging / swapping &lt;br /&gt;
process &lt;br /&gt;
running program&lt;br /&gt;
virtualizing&lt;br /&gt;
time/space sharing&lt;br /&gt;
mechanisms &lt;br /&gt;
policy&lt;br /&gt;
}}}&lt;br /&gt;
&lt;br /&gt;
Chapter 4 in the book:&lt;br /&gt;
&lt;br /&gt;
processes - key abstraction in a modern operating system&lt;br /&gt;
&lt;br /&gt;
Sitting all day kills you - it seriously reduces your life expectancy, and working out doesn&#039;t necessarily make up for it. Walking around for 5 minutes every hour helps. Anyone use typing-break programs? An occupational hazard of the career path you have chosen is that you sit in front of a computer typing. Anil started typing in Dvorak early in order to avoid repetitive strain injuries. &lt;br /&gt;
&lt;br /&gt;
[http://www.lcdf.org/xwrits/ xwrits] is a program to save your wrists on a *nix machine: it displays hand gestures to tell you it&#039;s time to get up and take a break, and you can even configure it to insult you (which gesture counts as insulting depends on the culture).&lt;br /&gt;
&lt;br /&gt;
When typing it is important to take breaks.&lt;br /&gt;
&lt;br /&gt;
You need to distinguish between programs and processes. &#039;Program&#039; is an imprecise term in the context of an OS. &lt;br /&gt;
&lt;br /&gt;
&#039;Program&#039; is imprecise in the context of operating systems - your web browser, is that a program?&lt;br /&gt;
A web browser is a lot of little programs, but they make up one big program - it is not a precise thing. What is precise is an executable. An executable is a file on disk that can be exec&#039;d. (Disks are no longer necessarily disks - they are all kinds of things now.) This is the Unix version of the statement: there is a system call - execve - that takes as one of its parameters a file, and that file is then loaded into a process, obliterating whatever else was in the process. &lt;br /&gt;
&lt;br /&gt;
Code can take many forms in a computer system; it is just one form of data.&lt;br /&gt;
&lt;br /&gt;
For example, say you have a text file that holds a JavaScript or Perl program. That is a program, but it is also a text document, and the operating system kernel does not recognize it as an executable: you cannot give it as an argument to the execve system call. It has to be run indirectly - you have to find another executable (an interpreter) to run that code. So you have executables, and you have processes.&lt;br /&gt;
&lt;br /&gt;
A process is an executable that has been executed - loaded into memory and started running. You should think of a process as an abstraction of a computer that can only run one program at a time. (On older computers - early 1960&#039;s or so - there was no abstraction of a process and no notion of running more than one program at a time. Logically speaking, when you wanted to run a program, all of memory would be loaded with that program, and when you wanted to quit the program, you cut the power.) Such machines run one program at a time: you load it off the disk and it has complete control of the machine. A process is the abstraction you get when you say: we don&#039;t want every program to have complete control of the computer, because I do not want to have to reboot the computer to switch programs. I want to run different programs concurrently, for multiple reasons - for example, to chain multiple programs together to produce a result (a Unix pipeline). The process gives each running program its own virtual computer to run in. &lt;br /&gt;
&lt;br /&gt;
Virtualizing / virtualization (term is rather overloaded) What am I talking about when I say virtual? Something that isn&#039;t real. It&#039;s not a real thing. When people talk about virtual reality, they are talking about something that can be experienced. What we are saying in a computer science context: When we say virtual, we are really talking about an abstraction - What we actually have, the real thing is not good enough, it doesn&#039;t have qualities that you want, so you want to transform it into something more useful (in some way). When we talk about a virtual machine, we are talking about a machine (computer) that does not exist, in the sense that it is not embodied in actual hardware. &lt;br /&gt;
&lt;br /&gt;
(From the theoretical side of computer science:) All programming languages and programming systems are, to a first approximation, equivalent - a system that is Turing complete can run anything, and turning one Turing complete system into another is essentially what virtualization does. The kind you&#039;ve often heard of is the language-based virtual machine - for example, the Java virtual machine. Really, any time you run a higher-level language (Perl, JavaScript, Python, etc.), that code does not run directly on the processor; it runs inside another program which implements some kind of virtual machine. Strictly speaking, a lot of languages can be interpreted, meaning a program goes through the code line by line and figures out what each line is supposed to do and what the next instruction is - but essentially no modern language implementation operates that way. They all go through some sort of translation phase that converts the source to byte code, and then the runtime executes the byte code. That runtime is what&#039;s called a virtual machine. Virtual machines are everywhere when we talk about running programs: operating systems can be thought of as implementing a virtual machine too, and the virtual machine an OS implements is the process. There is a key difference between the virtual machine that makes processes and the typical language-based virtual machine (though the difference is getting smaller). Any idea what this difference is?&lt;br /&gt;
&lt;br /&gt;
Java based Virtual Machine - executes byte codes.&lt;br /&gt;
hardware can&#039;t interpret byte code &lt;br /&gt;
&lt;br /&gt;
What is the nature of the binary format being run in an operating system process? What format is that code? Machine code - the code understood by the processor. Machine code on one side, byte code on the other - what&#039;s the difference? The hardware can&#039;t interpret byte code; that language needs another program to translate it. But why can&#039;t the processor understand Java byte code? It could - there are chips that run Java byte code natively. And here&#039;s what&#039;s worse: the machine code that your processor &#039;understands&#039;? It actually doesn&#039;t, not directly - modern processors, x86 or x86-64 (the most common things for a PC), or ARM machine language, that sort of thing. &lt;br /&gt;
&lt;br /&gt;
This language is too annoying to use internally inside the microprocessor - it&#039;s not efficient and was not designed to run very fast. The processor actually has a front end that takes that code and translates it into another, internal code. There have even been processor startups that, instead of doing this directly on the chip, put something like a Java virtual machine on the processor. Why am I saying this? Because the virtual and the real in computer science are often hard to tell apart: what is virtual to one group can be real to another. When you are coding in Java or C, that language is real to you - that is the abstraction you are working in - but there are other levels below you, and yours is generally not the &#039;real&#039; level. When you are dealing with millions of transistors, there is a lot of abstraction. The process, then, is the virtual machine in which you run programs: you take a file from disk and load it into memory. There is a little problem with this concept, though. Is there a one-to-one mapping between programs on disk and programs in memory? Not at all! Most programs on disk are not running at any given time, and a given program on disk can be running in many different processes - you can have multiple instances of the same program running at the same time. So, logically, in an API for an operating system you have to distinguish between the creation of a process and the loading of an executable into that process, because you want to facilitate that many-to-many mapping. What does that API look like? And I&#039;ll give you another funny thing: if you are running one program, can you make that program do multiple things at the same time? Yes - there is this whole notion of threading. But a thread is not a process.&lt;br /&gt;
&lt;br /&gt;
==A thread is not a process.==&lt;br /&gt;
&lt;br /&gt;
Process = thread(s) + address space&lt;br /&gt;
&lt;br /&gt;
The CPU is virtualized too, because it would be really annoying to only have 4 things running at a time. When we talk about things running at the same time - how many things are actually running at the same time? (Hyperthreading.) What does it mean to run things at the same time: is it actually running, or logically running? We want the abstraction that every process has its own computer. That&#039;s where threads come in. A process has memory in which it can run; inside that process, how many processors do I have running? Classically, only 1. When you get to multithreaded processes, there is more than one logical CPU. If you think about this - that&#039;s a mess! Having more than one program counter to track inside one address space causes lots of problems. What happens when they step on each other? They do crazy things like change the loop index from outside the loop. How do you reason about your code when things like this happen? When you put more than one logical CPU inside an address space, that can happen. For a long time operating systems only supported processes with a single thread of execution - they supported lots of processes, but made sure each one had only 1 thread.&lt;br /&gt;
&lt;br /&gt;
So: more than one CPU running around inside the same address space. What happens if two of them try running the same code at once, or one changes the loop index from outside the loop? The right way to think of this is: don&#039;t do that. For a long time operating systems supported only one CPU inside an address space - lots of processes, but one CPU each. That&#039;s kind of limiting. But why do you want your running programs to share memory at all? The main reason is to communicate, and shared memory has only one advantage: it can be very fast. How do you make sure you don&#039;t overwrite each other&#039;s messages? In modern computation - distributed systems, big systems - when you share memory you almost always put some sort of API on top of it to control access. The cost, other than potential communication overhead, is higher overhead in general: you now have an address space to keep track of, its own version of memory for each running program. That&#039;s so much overhead that it was a good while before computers supported this, because it takes a lot of transistors to do. It used to be that a completely separate chip (an MMU) took care of giving every running program its own address space; it is now integrated into CPUs. &lt;br /&gt;
&lt;br /&gt;
A running program is a process: an address space plus one or more threads. That&#039;s the virtual machine you are running in, and it runs machine code.  &lt;br /&gt;
&lt;br /&gt;
Did you see any assembly in 2401? Not really.&lt;br /&gt;
&lt;br /&gt;
Some terminology. Time and space sharing: when we talk about virtualizing resources - virtualizing the CPU, virtualizing RAM - what we are actually talking about is sharing. Like on a playground, we need to play nicely together.&lt;br /&gt;
An operating system is a set of mechanisms and policies for allowing time and space sharing of computer resources. Time sharing is taking turns: the processor is a limited resource, so one program gets it for a while, then another, then another.  Space sharing means you have all this RAM, so split it up; you have this disk, so split it up - one program gets part of it, and another program gets another part. That&#039;s what we mean by space sharing. &lt;br /&gt;
&lt;br /&gt;
Virtual memory and physical memory&lt;br /&gt;
&lt;br /&gt;
[[File:Virtualmemory.png]]&lt;br /&gt;
&lt;br /&gt;
A program running with full privileges can still have a segmentation fault. The kernel can also have a segmentation fault; when that happens, the machine will crash hard. &lt;br /&gt;
&lt;br /&gt;
[http://slacksite.com/slackware/oops.html For Oops vs. Panics]&lt;br /&gt;
&lt;br /&gt;
Environment variables for X: &lt;br /&gt;
&lt;br /&gt;
The X view: direct graphical output from another computer on the network - bad because of latency. &lt;br /&gt;
&lt;br /&gt;
How to get around the lag: run more of the code on the client instead of the server. Have the X clients transfer some code to the X server, to run there. This is like a website: you download the page and it runs in your browser - same idea, different technology stack.&lt;br /&gt;
&lt;br /&gt;
Mechanisms vs. Policy - &lt;br /&gt;
&lt;br /&gt;
mechanisms - the things that do things, the knobs that let us manipulate program state - should be maximally flexible, so that they can implement whatever policies you want. &lt;br /&gt;
&lt;br /&gt;
policies are what you should actually do with those mechanisms &lt;br /&gt;
&lt;br /&gt;
X Server &amp;lt;= mechanism&lt;br /&gt;
&lt;br /&gt;
window manager, toolkit &amp;lt;= policy&lt;br /&gt;
&lt;br /&gt;
Windows - one call - CreateProcess() &amp;lt;--- many different parameters&lt;br /&gt;
&lt;br /&gt;
unix - fork() and execve(file, cmdline, env)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19347</id>
		<title>Operating Systems 2014F Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19347"/>
		<updated>2014-10-06T17:09:42Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Audio from the lecture given on September 10, 2014 [http://homeostasis.scs.carleton.ca/~soma/os-2014f/lectures/comp3000-2014f-lec02-10Sep2014.mp3 is now available].&lt;br /&gt;
&lt;br /&gt;
{{{&lt;br /&gt;
machine state &lt;br /&gt;
program counter &lt;br /&gt;
process states &lt;br /&gt;
paging / swapping &lt;br /&gt;
process &lt;br /&gt;
running program&lt;br /&gt;
virtualizing&lt;br /&gt;
time/space sharing&lt;br /&gt;
mechanisms &lt;br /&gt;
policy&lt;br /&gt;
}}}&lt;br /&gt;
&lt;br /&gt;
Chapter 4 in the book:&lt;br /&gt;
&lt;br /&gt;
processes - key abstraction in a modern operating system&lt;br /&gt;
&lt;br /&gt;
Sitting all day kills you - it seriously reduces your life expectancy, and working out doesn&#039;t necessarily make up for it. Walking around for 5 minutes every hour helps. Anyone use typing-break programs? An occupational hazard of the career path you have chosen is that you sit in front of a computer typing. Anil started typing in Dvorak early in order to avoid repetitive strain injuries. &lt;br /&gt;
&lt;br /&gt;
[http://www.lcdf.org/xwrits/ xwrits] is a program to save your wrists on a *nix machine: it displays hand gestures to tell you it&#039;s time to get up and take a break, and you can even configure it to insult you (which gesture counts as insulting depends on the culture).&lt;br /&gt;
&lt;br /&gt;
When typing it is important to take breaks.&lt;br /&gt;
&lt;br /&gt;
You need to distinguish between programs and processes. &#039;Program&#039; is an imprecise term in the context of an OS. &lt;br /&gt;
&lt;br /&gt;
&#039;Program&#039; is imprecise in the context of operating systems - your web browser, is that a program?&lt;br /&gt;
A web browser is a lot of little programs, but they make up one big program - it is not a precise thing. What is precise is an executable. An executable is a file on disk that can be exec&#039;d. (Disks are no longer necessarily disks - they are all kinds of things now.) This is the Unix version of the statement: there is a system call - execve - that takes as one of its parameters a file, and that file is then loaded into a process, obliterating whatever else was in the process. &lt;br /&gt;
&lt;br /&gt;
Code can take many forms in a computer system; it is just one form of data.&lt;br /&gt;
&lt;br /&gt;
For example, say you have a text file that holds a JavaScript or Perl program. That is a program, but it is also a text document, and the operating system kernel does not recognize it as an executable: you cannot give it as an argument to the execve system call. It has to be run indirectly - you have to find another executable (an interpreter) to run that code. So you have executables, and you have processes.&lt;br /&gt;
&lt;br /&gt;
A process is an executable that has been executed - loaded into memory and started running. You should think of a process as an abstraction of a computer that can only run one program at a time. (On older computers - early 1960&#039;s or so - there was no abstraction of a process and no notion of running more than one program at a time. Logically speaking, when you wanted to run a program, all of memory would be loaded with that program, and when you wanted to quit the program, you cut the power.) Such machines run one program at a time: you load it off the disk and it has complete control of the machine. A process is the abstraction you get when you say: we don&#039;t want every program to have complete control of the computer, because I do not want to have to reboot the computer to switch programs. I want to run different programs concurrently, for multiple reasons - for example, to chain multiple programs together to produce a result (a Unix pipeline). The process gives each running program its own virtual computer to run in. &lt;br /&gt;
&lt;br /&gt;
Virtualizing / virtualization (term is rather overloaded) What am I talking about when I say virtual? Something that isn&#039;t real. It&#039;s not a real thing. When people talk about virtual reality, they are talking about something that can be experienced. What we are saying in a computer science context: When we say virtual, we are really talking about an abstraction - What we actually have, the real thing is not good enough, it doesn&#039;t have qualities that you want, so you want to transform it into something more useful (in some way). When we talk about a virtual machine, we are talking about a machine (computer) that does not exist, in the sense that it is not embodied in actual hardware. &lt;br /&gt;
&lt;br /&gt;
(From the theoretical side of computer science:) All programming languages and programming systems are, to a first approximation, equivalent - a system that is Turing complete can run anything, and turning one Turing complete system into another is essentially what virtualization does. The kind you&#039;ve often heard of is the language-based virtual machine - for example, the Java virtual machine. Really, any time you run a higher-level language (Perl, JavaScript, Python, etc.), that code does not run directly on the processor; it runs inside another program which implements some kind of virtual machine. Strictly speaking, a lot of languages can be interpreted, meaning a program goes through the code line by line and figures out what each line is supposed to do and what the next instruction is - but essentially no modern language implementation operates that way. They all go through some sort of translation phase that converts the source to byte code, and then the runtime executes the byte code. That runtime is what&#039;s called a virtual machine. Virtual machines are everywhere when we talk about running programs: operating systems can be thought of as implementing a virtual machine too, and the virtual machine an OS implements is the process. There is a key difference between the virtual machine that makes processes and the typical language-based virtual machine (though the difference is getting smaller). Any idea what this difference is?&lt;br /&gt;
&lt;br /&gt;
Java Virtual Machine - executes byte code.&lt;br /&gt;
The hardware can&#039;t interpret byte code. &lt;br /&gt;
&lt;br /&gt;
What is the nature of the binary format that is run in an operating system process? What format is that code? Machine code - the code that is understood by the processor. Machine code on one side, byte code on the other - what&#039;s the difference? The hardware can&#039;t interpret byte code; that language needs another program to translate it for the processor. Why can&#039;t the processor understand Java byte code? It could - there are chips that run Java byte code natively. What&#039;s worse: does your processor really understand its own machine code? It actually doesn&#039;t. Modern processors run x86 or x86-64 machine language (the most common for a PC), or ARM machine language, that sort of thing. &lt;br /&gt;
&lt;br /&gt;
That machine language is too annoying to use internally inside the microprocessor - it&#039;s not efficient, and it was not designed to run very fast - so the processor actually has a front end that takes that code and translates it to another, internal code. There have been processor startups where, instead, they put something like a Java virtual machine on the processor. Why am I saying this? Because the virtual and the real in computer science are often hard to tell apart. What is virtual to one group can be real to another. When you are coding in Java or C, that language is real to you - that is the abstraction you are working in - but there are other levels below you, and yours is generally not the real level. When you are dealing with millions of transistors, there is a lot of abstraction. &lt;br /&gt;
&lt;br /&gt;
The process is the virtual machine that you run programs in. You take a file on disk and load it into memory. There is a little problem with this concept: is there a one-to-one mapping between programs on disk and programs in memory? Not at all! Most programs on disk are not running at any given time, and a given program on disk can be running in many different processes - you can have multiple instances of the same program running at the same time. So, logically, in an API for an operating system you have to distinguish between the creation of a process and the loading of an executable into that process, because you want to facilitate this many-to-many mapping. What does that API look like? And here&#039;s another funny thing: if you are running one program, can you make that program do multiple things at the same time? Yes - there is this whole notion of threading. But a thread is not a process.&lt;br /&gt;
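A minimal sketch of the Unix answer to that API question, in Python (this is not the lecture&#039;s code; the echoed message is an invented example). Process creation (fork) is a separate operation from program loading (exec), which is what permits the many-to-many mapping between programs on disk and processes:

```python
import os
import sys

# fork clones the current process: creation, without loading anything new.
pid = os.fork()
if pid == 0:
    # Child: exec replaces this process image with a program from disk.
    os.execv("/bin/echo", ["echo", "hello from a freshly loaded program"])
else:
    # Parent: still running the original program; wait for the child.
    _, status = os.waitpid(pid, 0)
    sys.stdout.write("child exit status: %d\n" % os.WEXITSTATUS(status))
```

Because the two steps are separate, the same executable can be loaded into many processes, and one process can exec many executables in turn.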
&lt;br /&gt;
==A thread is not a process.==&lt;br /&gt;
&lt;br /&gt;
Process = thread(s) + address space&lt;br /&gt;
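To make the equation concrete, a small Python sketch (the names are mine, not the lecture&#039;s): a thread runs inside the process and shares its address space, so a write made by one thread is directly visible to another.

```python
import threading

# One process, one address space, shared by every thread in it.
shared = []

def worker():
    # This write goes into the same memory the main thread reads.
    shared.append("written by the worker thread")

t = threading.Thread(target=worker)
t.start()
t.join()                      # wait for the worker to finish

print(shared[0])              # the main thread sees the worker's write
```

Two processes, by contrast, would each have their own copy of `shared`.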
&lt;br /&gt;
The CPU is virtual too, because it would be really annoying to only have, say, 4 things running at a time when we want to talk about many things running at the same time.&lt;br /&gt;
Threads mean having more than one CPU running around inside the same address space. What happens if two of them run the same code at once? One can change the other&#039;s loop index from outside the loop. The right way to think about this is: don&#039;t do that. For a long time operating systems supported only one CPU inside an address space - lots of processes, but one CPU each. That&#039;s kind of limiting. But why do you want your running programs to share memory at all? The main reason is to communicate. Shared memory has one big advantage - it can be very fast. The problem is making sure you don&#039;t overwrite each other&#039;s messages. &lt;br /&gt;
&lt;br /&gt;
In modern computation, on big systems, when you share memory you almost always put some sort of API on top of it to control access. The only real cost is the potential overhead of that API. &lt;br /&gt;
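A sketch of such an API in Python (the lock-protected mailbox is an invented example, not from the lecture): a lock serializes writers so concurrent sends cannot overwrite each other, and acquiring the lock is exactly the overhead mentioned above.

```python
import threading

lock = threading.Lock()
mailbox = []                 # shared memory all threads can see

def send(msg):
    with lock:               # acquire before touching the shared state
        mailbox.append(msg)

threads = [threading.Thread(target=send, args=("message %d" % i,))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(mailbox))          # all 8 messages arrive intact
```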
&lt;br /&gt;
An operating system is a set of mechanisms and policies that allow for time and space sharing of the hardware. The processor is a limited resource: time sharing means one program gets it for a while, then another gets it for a while. Space sharing means dividing up resources such as RAM - each process gets its own region of memory. &lt;br /&gt;
&lt;br /&gt;
Virtual memory and physical memory&lt;br /&gt;
&lt;br /&gt;
[[File:Virtualmemory.png]]&lt;br /&gt;
&lt;br /&gt;
A program running with full privileges can still have a segmentation fault. The kernel can also have a segmentation fault; when that happens the machine will crash hard. &lt;br /&gt;
&lt;br /&gt;
[http://slacksite.com/slackware/oops.html For Oops vs. Panics]&lt;br /&gt;
&lt;br /&gt;
Environment variables for X: &lt;br /&gt;
&lt;br /&gt;
Remote display - direct graphical output from another computer on the network - is bad because of latency. &lt;br /&gt;
&lt;br /&gt;
How do you get around the lag? Run more of the code on the display side instead of shipping every drawing operation over the network: have the X clients transfer code to the X server, to run there. This is just like a website: you download the page and its code runs in your browser. Same idea, different technology stack.&lt;br /&gt;
&lt;br /&gt;
Mechanisms vs. Policy - &lt;br /&gt;
&lt;br /&gt;
Mechanisms are the things that do things - the knobs that let us manipulate program state. They should be maximally flexible, so that they can implement whatever policies you want. &lt;br /&gt;
&lt;br /&gt;
Policies are what you should do. &lt;br /&gt;
&lt;br /&gt;
X Server &amp;lt;= mechanism&lt;br /&gt;
&lt;br /&gt;
window manager, toolkit &amp;lt;= policy&lt;br /&gt;
&lt;br /&gt;
Windows - one call - CreateProcess() &amp;lt;--- many different parameters&lt;br /&gt;
&lt;br /&gt;
Unix - two calls - fork() and execve(file, argv, env)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19346</id>
		<title>Operating Systems 2014F Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2014F_Lecture_2&amp;diff=19346"/>
		<updated>2014-10-06T16:58:44Z</updated>

		<summary type="html">&lt;p&gt;Afry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Audio from the lecture given on September 10, 2014 [http://homeostasis.scs.carleton.ca/~soma/os-2014f/lectures/comp3000-2014f-lec02-10Sep2014.mp3 is now available].&lt;br /&gt;
&lt;br /&gt;
{{{&lt;br /&gt;
machine state &lt;br /&gt;
program counter &lt;br /&gt;
process states &lt;br /&gt;
paging / swapping &lt;br /&gt;
process &lt;br /&gt;
running program&lt;br /&gt;
virtualizing&lt;br /&gt;
time/space sharing&lt;br /&gt;
mechanisms &lt;br /&gt;
policy&lt;br /&gt;
}}}&lt;br /&gt;
&lt;br /&gt;
Chapter 4 in the book:&lt;br /&gt;
&lt;br /&gt;
processes - key abstraction in a modern operating system&lt;br /&gt;
&lt;br /&gt;
Sitting all day kills you - it seriously reduces your life expectancy, and working out doesn&#039;t necessarily make up for sitting all day. Walking around for 5 minutes every hour helps. Anyone use typing-break programs? An occupational hazard of the career path you have chosen is that you sit in front of a computer typing. Anil started typing Dvorak early in order to avoid repetitive strain injuries. &lt;br /&gt;
&lt;br /&gt;
[http://www.lcdf.org/xwrits/ xwrits] is a program to save your wrists on a *nix machine: it shows you hand gestures to say it&#039;s time to get up, and you can even get it to insult you with gestures that depend on the culture - all in order to tell you to take a break.&lt;br /&gt;
&lt;br /&gt;
When typing it is important to take breaks.&lt;br /&gt;
&lt;br /&gt;
You need to distinguish between programs and processes. &amp;quot;Program&amp;quot; is imprecise in the context of an O/S. &lt;br /&gt;
&lt;br /&gt;
Is your web browser a program?&lt;br /&gt;
A web browser is a lot of little programs that make up one big program - it is not a precise thing. What is precise is an executable. An executable is a file on disk that can be exec&#039;d. (Disks are no longer literal disks - they are all kinds of storage devices.) This is the Unix version of the statement: there is a system call, execve, that takes as one of its parameters a file, and that file is then loaded into a process, obliterating whatever else was in the process. &lt;br /&gt;
&lt;br /&gt;
Code can take many forms in a computer system; it is just one form of data.&lt;br /&gt;
&lt;br /&gt;
For example, say you have a text file containing a JavaScript or Perl program. That is a program, but it is also just a text document; the operating system kernel does not recognize it as an executable, and you cannot give it as an argument to the execve system call. The kernel has to run it indirectly: it has to find another executable - an interpreter - to run that code. So you have executables, and you have processes.&lt;br /&gt;
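A sketch of this indirection in Python (the one-line script is an invented example): the text file is just data to the kernel, so we hand it to another executable - the interpreter - instead of exec&#039;ing it directly.

```python
import os
import subprocess
import sys
import tempfile

# Write a tiny "program" that is really just a text file on disk.
fd, path = tempfile.mkstemp(suffix=".py")
os.write(fd, b"print(40 + 2)")
os.close(fd)

# The interpreter is the executable; the script file is merely its argument.
result = subprocess.run([sys.executable, path],
                        capture_output=True, text=True)
sys.stdout.write(result.stdout)
os.unlink(path)
```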
&lt;br /&gt;
A process is an executable that has been executed - loaded into memory and started running. You should think of a process as an abstraction of a computer that can only run one program at a time. (On very early machines there was no abstraction of a process and no notion of running more than one program at a time. Logically speaking, when you wanted to run a program, all of memory would be loaded with that program, and when you wanted to quit, you cut the power.) Such machines run one program at a time: you load it off the disk and it has complete control of the machine. The process is the abstraction you get when you say: we don&#039;t want every program to have complete control of the computer, because I do not want to reboot the computer to switch programs. I want to run different programs concurrently, for multiple reasons - for example, to chain multiple programs together to produce a result (a Unix pipeline). The process gives each running executable its own virtual computer to run on. &lt;br /&gt;
&lt;br /&gt;
Virtualizing / virtualization (the term is rather overloaded). What am I talking about when I say virtual? Something that isn&#039;t real - it&#039;s not a real thing. When people talk about virtual reality, they are talking about something that isn&#039;t real but can still be experienced. In a computer science context, when we say virtual we are really talking about an abstraction: the real thing we actually have is not good enough - it doesn&#039;t have the qualities you want - so you transform it into something more useful (in some way). When we talk about a virtual machine, we are talking about a machine (computer) that does not exist, in the sense that it is not embodied in actual hardware. &lt;br /&gt;
&lt;br /&gt;
(From the theoretical side of computer science:) All programming languages and programming systems are, to a first approximation, equivalent - a system that is Turing complete can run anything any other Turing complete system can. Turning one Turing complete system into another is the process of virtualization. The kind you&#039;ve most often heard of is the language-based virtual machine - for example, the Java virtual machine. Really, this applies any time you run a higher-level language (Perl, JavaScript, Python, etc.): that code does not run directly on the processor; it runs inside another program which implements some kind of virtual machine. Strictly speaking, a language can be interpreted, meaning a program goes through the source line by line and figures out what each line is supposed to do and what the next instruction is. But essentially no modern language implementation operates that way: they all go through some sort of translation phase that converts the source to byte code, and then run the byte code. That runtime is what&#039;s called a virtual machine. Virtual machines are everywhere when we are talking about running programs. An operating system can also be thought of as implementing a virtual machine, and the virtual machine it implements is the process. There is a key difference between the virtual machine that makes processes and the typical language-based virtual machine, though the difference is getting smaller. Any idea what this difference is?&lt;br /&gt;
&lt;br /&gt;
Java Virtual Machine - executes byte code.&lt;br /&gt;
The hardware can&#039;t interpret byte code. &lt;br /&gt;
&lt;br /&gt;
What is the nature of the binary format that is run in an operating system process? What format is that code? Machine code - the code that is understood by the processor. Machine code on one side, byte code on the other - what&#039;s the difference? The hardware can&#039;t interpret byte code; that language needs another program to translate it for the processor. Why can&#039;t the processor understand Java byte code? It could - there are chips that run Java byte code natively. What&#039;s worse: does your processor really understand its own machine code? It actually doesn&#039;t. Modern processors run x86 or x86-64 machine language (the most common for a PC), or ARM machine language, that sort of thing. That machine language is too annoying to use internally inside the microprocessor - it&#039;s not efficient, and it was not designed to run very fast - so the processor actually has a front end that takes that code and translates it to another, internal code. There have been processor startups where, instead, they put something like a Java virtual machine on the processor. Why am I saying this? Because the virtual and the real in computer science are often hard to tell apart. What is virtual to one group can be real to another. When you are coding in Java or C, that language is real to you - that is the abstraction you are working in - but there are other levels below you, and yours is generally not the real level. When you are dealing with millions of transistors, there is a lot of abstraction. The process is the virtual machine that you run programs in. You take a file on disk and load it into memory. There is a little problem with this concept: is there a one-to-one mapping between programs on disk and programs in memory? Not at all! Most programs on disk are not running at any given time, and a given program on disk can be running in many different processes - you can have multiple instances of the same program running at the same time. 
Logically, in an API you have to distinguish between the creation of a process and the loading of an executable into that process. &lt;br /&gt;
&lt;br /&gt;
A thread is not a process.&lt;br /&gt;
&lt;br /&gt;
Process = thread(s) + address space&lt;br /&gt;
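The other side of the equation, sketched in Python (the variable names are mine, not the lecture&#039;s): a forked child gets its own address space, so its writes never reach the parent - the opposite of what a thread in the same process would see.

```python
import os
import sys

value = ["parent data"]

# fork gives the child a separate (copy-on-write) address space.
pid = os.fork()
if pid == 0:
    value[0] = "child data"   # modifies only the child's copy
    os._exit(0)

os.waitpid(pid, 0)
sys.stdout.write(value[0])    # still "parent data" in the parent
```

To let the two processes communicate, you would need shared memory or another IPC mechanism - which is exactly why shared memory matters.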
&lt;br /&gt;
Threads mean having more than one CPU running around inside the same address space. What happens if two of them run the same code at once? One can change the other&#039;s loop index from outside the loop. The right way to think about this is: don&#039;t do that. For a long time operating systems supported only one CPU inside an address space - lots of processes, but one CPU each. That&#039;s kind of limiting. But why do you want your running programs to share memory at all? The main reason is to communicate. Shared memory has one big advantage - it can be very fast. The problem is making sure you don&#039;t overwrite each other&#039;s messages. &lt;br /&gt;
&lt;br /&gt;
In modern computation, on big systems, when you share memory you almost always put some sort of API on top of it to control access. The only real cost is the potential overhead of that API. &lt;br /&gt;
&lt;br /&gt;
An operating system is a set of mechanisms and policies that allow for time and space sharing of the hardware. The processor is a limited resource: time sharing means one program gets it for a while, then another gets it for a while. Space sharing means dividing up resources such as RAM - each process gets its own region of memory. &lt;br /&gt;
&lt;br /&gt;
Virtual memory and physical memory&lt;br /&gt;
&lt;br /&gt;
[[File:Virtualmemory.png]]&lt;br /&gt;
&lt;br /&gt;
A program running with full privileges can still have a segmentation fault. The kernel can also have a segmentation fault; when that happens the machine will crash hard. &lt;br /&gt;
&lt;br /&gt;
[http://slacksite.com/slackware/oops.html For Oops vs. Panics]&lt;br /&gt;
&lt;br /&gt;
Environment variables for X: &lt;br /&gt;
&lt;br /&gt;
Remote display - direct graphical output from another computer on the network - is bad because of latency. &lt;br /&gt;
&lt;br /&gt;
How do you get around the lag? Run more of the code on the display side instead of shipping every drawing operation over the network: have the X clients transfer code to the X server, to run there. This is just like a website: you download the page and its code runs in your browser. Same idea, different technology stack.&lt;br /&gt;
&lt;br /&gt;
Mechanisms vs. Policy - &lt;br /&gt;
&lt;br /&gt;
Mechanisms are the things that do things - the knobs that let us manipulate program state. They should be maximally flexible, so that they can implement whatever policies you want. &lt;br /&gt;
&lt;br /&gt;
Policies are what you should do. &lt;br /&gt;
&lt;br /&gt;
X Server &amp;lt;= mechanism&lt;br /&gt;
&lt;br /&gt;
window manager, toolkit &amp;lt;= policy&lt;br /&gt;
&lt;br /&gt;
Windows - one call - CreateProcess() &amp;lt;--- many different parameters&lt;br /&gt;
&lt;br /&gt;
Unix - two calls - fork() and execve(file, argv, env)&lt;/div&gt;</summary>
		<author><name>Afry</name></author>
	</entry>
</feed>