<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rhooper</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rhooper"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Rhooper"/>
	<updated>2026-05-02T09:13:17Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Early_Internet_%26_RPC&amp;diff=1829</id>
		<title>Early Internet &amp; RPC</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Early_Internet_%26_RPC&amp;diff=1829"/>
		<updated>2008-04-16T00:33:42Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Pros */ - fix remove -&amp;gt; remote&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[http://www.scs.carleton.ca/~soma/distos/2008-01-14/kahn1972-resource.pdf Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is one of the early papers describing the rationale behind the&lt;br /&gt;
ARPANET (which later evolved into the Internet).  For more&lt;br /&gt;
background on the ARPANET, see [http://video.google.com/videoplay?docid=4989933629762859961 Computer Networks - The Heralds of Resource Sharing] (optional).&lt;br /&gt;
  &lt;br /&gt;
Note how both the ARPANET and standard operating systems were&lt;br /&gt;
developed to facilitate resource sharing.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[http://www.scs.carleton.ca/~soma/distos/2008-01-14/nelson1981-rpc.pdf Bruce J. Nelson, &#039;&#039;Remote Procedure Call&#039;&#039; (1981)]:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You only need to read the thesis summary which starts at page 224 in&lt;br /&gt;
the PDF.  If you have time, however, I&#039;d suggest looking at the&lt;br /&gt;
rest, particularly the introduction and related work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[http://www.scs.carleton.ca/~soma/distos/2008-01-14/birrell1984-rpcimpl.pdf Birrell &amp;amp; Nelson, &amp;quot;Implementing Remote Procedure Calls&amp;quot; (1984)]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Compare the perspective of this RPC implementation description with&lt;br /&gt;
the more design-oriented focus of Nelson&#039;s thesis.&lt;br /&gt;
&lt;br /&gt;
==Questions to be discussed==&lt;br /&gt;
&lt;br /&gt;
# What did the (technical) world look like when this paper was published?&lt;br /&gt;
# What is the paper about?  What are the key ideas?&lt;br /&gt;
# What is the basic argument of the paper, and what sort of evidence is used to make the argument?&lt;br /&gt;
# To what extent do you &amp;quot;believe&amp;quot; the argument?  Why?&lt;br /&gt;
# What did the authors get right about the future? What did they miss about the future?&lt;br /&gt;
# What has been forgotten since this paper?&lt;br /&gt;
&lt;br /&gt;
==Presentations==&lt;br /&gt;
&lt;br /&gt;
==Debate==&lt;br /&gt;
Debate about the pros and cons of RPC. Topic: RPCs are the right foundation for distributed applications (on today&#039;s Internet).&lt;br /&gt;
&lt;br /&gt;
===Pros===&lt;br /&gt;
# Easy to use&lt;br /&gt;
#* makes remote procedure calls look like local calls&lt;br /&gt;
#* local is easy&lt;br /&gt;
#* therefore remote is easy with RPC - good!&lt;br /&gt;
# (Counter to first con) Use standards, problem goes away&lt;br /&gt;
# Easy to debug&lt;br /&gt;
# Avoids complexity of protocol design&lt;br /&gt;
# RPC can be abstracted from the protocol design&lt;br /&gt;
# (Counter to fourth con) Use threads for interactivity. Manage complexity by doing things pairwise.&lt;br /&gt;
# (Counter to fifth con) - wrong!&lt;br /&gt;
# (Counter to seventh con) - Use standards (ORBs in CORBA, etc.) for authentication&lt;br /&gt;
# (Counter to eighth con) - But you specify the API&lt;br /&gt;
# (Counter to ninth con) - It&#039;s the same with other methods. But RPC makes it easier to do.&lt;br /&gt;
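&lt;br /&gt;
The first pro above (remote calls looking local) can be sketched in a few lines of Python - a hypothetical &#039;&#039;add&#039;&#039; procedure, with an in-process dispatch function standing in for the network hop:&lt;br /&gt;

```python
import json

# Toy RPC sketch (illustrative names): the client stub has the same
# signature as the real procedure, hiding marshalling and transport.
def add(a, b):                 # the "remote" procedure on the server
    return a + b

def server_dispatch(request):  # stands in for the actual network hop
    msg = json.loads(request)
    result = add(*msg["args"])
    return json.dumps({"result": result})

def add_stub(a, b):            # client stub: looks like a local call
    request = json.dumps({"proc": "add", "args": [a, b]})
    reply = server_dispatch(request)
    return json.loads(reply)["result"]
```

The caller writes add_stub(2, 3) exactly as it would write a local call; all marshalling is hidden - which is also why several of the cons about opacity and overhead apply.&lt;br /&gt;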
&lt;br /&gt;
===Cons===&lt;br /&gt;
# Language/implementation specific&lt;br /&gt;
# (Counter to third pro) Not easy to debug. Example: a NullPointerException thrown on the server, reporting the line of erroneous code running on the server.&lt;br /&gt;
# (Counter to fifth pro) But there is more overhead - slower&lt;br /&gt;
# Synchronous - wrong model for large, distributed applications, and also bad for users&lt;br /&gt;
# Limited scalability - hard to do well&lt;br /&gt;
# Limited control of communication details&lt;br /&gt;
# Authentication is opaque&lt;br /&gt;
# Larger attack surface area&lt;br /&gt;
# Poor match to listening (hard for server to communicate information back to client)&lt;br /&gt;
# Backwards compatibility is difficult&lt;br /&gt;
# RPC is not a &amp;quot;natural&amp;quot; interaction style - message passing is more similar to human communication&lt;br /&gt;
# Too much to set up for simple applications&lt;br /&gt;
# Less flexible&lt;br /&gt;
&lt;br /&gt;
===Anil&#039;s Comments===&lt;br /&gt;
&lt;br /&gt;
RPC makes the simple things very easy, but it doesn&#039;t help with the big challenges of distributed programming: robustness and security.  If those factors are of minimal importance (e.g. you are programming an isolated cluster running a very high-performance application), then this trade-off doesn&#039;t matter.  However, there exist other distributed programming paradigms that are easy to use but that still distinguish between accessing internal functions and communicating with external entities.&lt;br /&gt;
&lt;br /&gt;
If we can keep the developer aware of when communication occurs, then there is a chance that the developer will remember to do the input validation and error checking at those points, which is what is needed to make robust and secure applications.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=MapReduce,_Globus,_BOINC&amp;diff=1821</id>
		<title>MapReduce, Globus, BOINC</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=MapReduce,_Globus,_BOINC&amp;diff=1821"/>
		<updated>2008-03-26T19:36:47Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Readings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-24/foster-grid.pdf Ian Foster and Carl Kesselman, &amp;quot;Computational Grids&amp;quot; (1998)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-24/foster-globus-intro.pdf Ian Foster, &amp;quot;Globus Toolkit Version 4: Software for Service-Oriented Systems&amp;quot; (2006)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-24/anderson-boinc.pdf David P. Anderson, &amp;quot;BOINC: A System for Public-Resource Computing and Storage&amp;quot; (2004)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-24/mapreduce-osdi04.pdf Jeffrey Dean and Sanjay Ghemawat, &amp;quot;MapReduce: Simplified Data Processing on Large Clusters&amp;quot; (2004)]&lt;br /&gt;
&lt;br /&gt;
Paper mentioned in class:&lt;br /&gt;
&lt;br /&gt;
[http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf Krste Asanović et al., &amp;quot;The Landscape of Parallel Computing Research: A View from Berkeley&amp;quot; (2006)]&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
===BOINC===&lt;br /&gt;
*Premise?  Local client on your machine downloads a &#039;workunit&#039;, churns the data, dumps the results and downloads a new &#039;workunit&#039;&lt;br /&gt;
*Why do we care?&lt;br /&gt;
**Entertainment?&lt;br /&gt;
**How is this an OS paradigm?  What is it useful for?&lt;br /&gt;
***It isn&#039;t really an OS, just a method to have your mass computation done&lt;br /&gt;
***More of a distributed scheduler?&lt;br /&gt;
****Not even that - a central scheduler, but mass computation&lt;br /&gt;
***How many systems have we seen that have accomplished mass computation on millions of uncontrolled computers?&lt;br /&gt;
****ummm... none?&lt;br /&gt;
***As an OS?&lt;br /&gt;
****An OS is something that is created to run programs&lt;br /&gt;
****This is a special case allowing us to run specific programs (BUT IS IT AN OS?)&lt;br /&gt;
***Useful for &amp;quot;embarrassingly parallel&amp;quot; programs&lt;br /&gt;
*Perfect for large scale simulation?&lt;br /&gt;
**But then you need LOTS of communication, and this system does not have interconnects&lt;br /&gt;
*The type of problems that we most care about tend not to be THAT parallel&lt;br /&gt;
&lt;br /&gt;
*So what would a distributed OS be for?&lt;br /&gt;
**Shared communication!&lt;br /&gt;
***But we don&#039;t have much in the way that works well.&lt;br /&gt;
*An OS typically provides a lot of services, together in one package&lt;br /&gt;
**We have been seeing that there are no complete packages, just pieces and parts.  Why?&lt;br /&gt;
***Computers are changing too fast?  Same *NIX OS, same TCP/IP stack... so more of the same, why no true solution?&lt;br /&gt;
***Communication is unreliable? Yes, but that is also nothing new&lt;br /&gt;
&lt;br /&gt;
*If people found that distributed file systems were successful, they would be in use all the time, but they aren&#039;t.  Reason? PERFORMANCE&lt;br /&gt;
&lt;br /&gt;
*Take away message?&lt;br /&gt;
*Can&#039;t handle communication - how do you abstract access to resources when driven through a network?&lt;br /&gt;
**As a result, we have many many specialized solutions for particular workloads.&lt;br /&gt;
*If you are willing to not have communication between nodes, you gain a HUGE amount of computation.&lt;br /&gt;
&lt;br /&gt;
*The most reliable systems are the ones that forgo communication.&lt;br /&gt;
**The more your system tolerates network failures, the better it scales.&lt;br /&gt;
&lt;br /&gt;
*We don&#039;t have a general-purpose distributed OS for clusters.&lt;br /&gt;
&lt;br /&gt;
===MapReduce===&lt;br /&gt;
*The communication happens when you reduce the problem. &lt;br /&gt;
**MapReduce works because there is mapping and there is reducing.&lt;br /&gt;
***There are no side effects (which is what enables this).&lt;br /&gt;
*Why is it a good fit for thousands of machines?&lt;br /&gt;
**They first split the work into all these pieces, and if one of them does not reply, they just do it over :)&lt;br /&gt;
***You create the algorithm to fit this model, create these pieces, and you have a combining function.&lt;br /&gt;
****You have to have some back end that keeps track of who got work done, but you don&#039;t care if a machine fails in the middle of the computation.&lt;br /&gt;
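&lt;br /&gt;
The structure described above can be sketched as a minimal, single-machine word count in the MapReduce style (hypothetical map_fn/reduce_fn names; no fault tolerance here, just the side-effect-free map and reduce phases):&lt;br /&gt;

```python
from collections import defaultdict

# Minimal MapReduce sketch: map emits (key, value) pairs with no side
# effects, a shuffle groups values by key, reduce combines each group.
def map_fn(doc):
    return [(word, 1) for word in doc.split()]

def reduce_fn(key, values):
    return sum(values)

def map_reduce(docs):
    groups = defaultdict(list)
    for doc in docs:               # "map" phase: trivially parallel
        for key, value in map_fn(doc):
            groups[key].append(value)
    # "reduce" phase: combine each key's values independently
    return {key: reduce_fn(key, values) for key, values in groups.items()}
```

Because map_fn has no side effects, a master can simply re-run a piece on another machine if a worker does not reply - the redo trick described above.&lt;br /&gt;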
*Compare MapReduce to POSIX&lt;br /&gt;
**The difference is in efficiency. MapReduce is an extension to POSIX.&lt;br /&gt;
***Distributed OSs try to run programs written for different APIs. The systems that work are the relaxed ones.&lt;br /&gt;
****Here is the model: lose compatibility, gain scalability.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebOS,_PlanetLab,_Starfish&amp;diff=1804</id>
		<title>WebOS, PlanetLab, Starfish</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebOS,_PlanetLab,_Starfish&amp;diff=1804"/>
		<updated>2008-03-18T23:08:03Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Readings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/vahat-webos-hpdc98.pdf Amin Vahdat et al., &amp;quot;WebOS: Operating System Services for Wide Area Applications&amp;quot; (1998)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/starfish.pdf Adnan Agbaria and Roy Friedman, &amp;quot;Starfish: Fault-Tolerant Dynamic MPI Programs on Clusters of Workstations&amp;quot; (2003)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/peterson-planetlab-osdi06.pdf Larry Peterson et al., &amp;quot;Experiences Building PlanetLab&amp;quot; (2006)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/anderson-planetlab-learning.pdf Thomas Anderson and Timothy Roscoe, &amp;quot;Learning from PlanetLab&amp;quot; (2006)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== WebOS ==&lt;br /&gt;
&lt;br /&gt;
=== Key features ===&lt;br /&gt;
* High Availability&lt;br /&gt;
* Lower Latency&lt;br /&gt;
* Fault Tolerance&lt;br /&gt;
&lt;br /&gt;
Consensus: WebOS isn&#039;t really a distributed OS&lt;br /&gt;
&lt;br /&gt;
=== Main components ===&lt;br /&gt;
* Smart Client&lt;br /&gt;
* WebFS&lt;br /&gt;
* Global naming scheme based on URLs&lt;br /&gt;
* Process control system&lt;br /&gt;
* CRISIS authentication/authorization system (Certificates with ACLs)&lt;br /&gt;
&lt;br /&gt;
=== Key ideas that were/were not adopted from WebOS ===&lt;br /&gt;
&lt;br /&gt;
Adopted:&lt;br /&gt;
* General idea of wide area dynamic distribution -&amp;gt; Akamai (but primarily for static content)&lt;br /&gt;
* Global naming using URLs&lt;br /&gt;
&lt;br /&gt;
Not Adopted:&lt;br /&gt;
* CRISIS &lt;br /&gt;
* WebFS (Although WebDAV could be said to be related)&lt;br /&gt;
* Smart client (for web sites)&lt;br /&gt;
&lt;br /&gt;
=== What are the pros and cons of using smart clients to do load balancing? ===&lt;br /&gt;
&lt;br /&gt;
Pro:&lt;br /&gt;
* Distributes computation&lt;br /&gt;
* More flexible&lt;br /&gt;
&lt;br /&gt;
Con:&lt;br /&gt;
* Vulnerable to Denial of Service or other forms of attacks&lt;br /&gt;
* Extra network overhead to locate a service&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebOS,_PlanetLab,_Starfish&amp;diff=1803</id>
		<title>WebOS, PlanetLab, Starfish</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=WebOS,_PlanetLab,_Starfish&amp;diff=1803"/>
		<updated>2008-03-18T23:07:28Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Readings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/vahat-webos-hpdc98.pdf Amin Vahdat et al., &amp;quot;WebOS: Operating System Services for Wide Area Applications&amp;quot; (1998)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/starfish.pdf Adnan Agbaria and Roy Friedman, &amp;quot;Starfish: Fault-Tolerant Dynamic MPI Programs on Clusters of Workstations&amp;quot; (2003)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/peterson-planetlab-osdi06.pdf Larry Peterson et al., &amp;quot;Experiences Building PlanetLab&amp;quot; (2006)]&lt;br /&gt;
&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-17/anderson-planetlab-learning.pdf Thomas Anderson and Timothy Roscoe, &amp;quot;Learning from PlanetLab&amp;quot; (2006)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== WebOS ===&lt;br /&gt;
&lt;br /&gt;
==== Key features ====&lt;br /&gt;
* High Availability&lt;br /&gt;
* Lower Latency&lt;br /&gt;
* Fault Tolerance&lt;br /&gt;
&lt;br /&gt;
Consensus: WebOS isn&#039;t really a distributed OS&lt;br /&gt;
&lt;br /&gt;
==== Main components ====&lt;br /&gt;
* Smart Client&lt;br /&gt;
* WebFS&lt;br /&gt;
* Global naming scheme based on URLs&lt;br /&gt;
* Process control system&lt;br /&gt;
* CRISIS authentication/authorization system (Certificates with ACLs)&lt;br /&gt;
&lt;br /&gt;
==== Key ideas that were/were not adopted from WebOS ====&lt;br /&gt;
&lt;br /&gt;
Adopted:&lt;br /&gt;
* General idea of wide area dynamic distribution -&amp;gt; Akamai (but primarily for static content)&lt;br /&gt;
* Global naming using URLs&lt;br /&gt;
&lt;br /&gt;
Not Adopted:&lt;br /&gt;
* CRISIS &lt;br /&gt;
* WebFS (Although WebDAV could be said to be related)&lt;br /&gt;
* Smart client (for web sites)&lt;br /&gt;
&lt;br /&gt;
==== What are the pros and cons of using smart clients to do load balancing? ====&lt;br /&gt;
&lt;br /&gt;
Pro:&lt;br /&gt;
* Distributes computation&lt;br /&gt;
* More flexible&lt;br /&gt;
&lt;br /&gt;
Con:&lt;br /&gt;
* Vulnerable to Denial of Service or other forms of attacks&lt;br /&gt;
* Extra network overhead to locate a service&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Bell_Labs&amp;diff=1783</id>
		<title>Bell Labs</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Bell_Labs&amp;diff=1783"/>
		<updated>2008-03-02T17:56:38Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Readings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-03/unix.pdf Dennis M. Ritchie and Ken Thompson, &amp;quot;The UNIX Time-Sharing System&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[http://doc.cat-v.org/plan_9/4th_edition/papers/9 Rob Pike et al., &amp;quot;Plan 9 from Bell Labs&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[http://doc.cat-v.org/inferno/4th_edition/inferno_OS Sean Dorward et al., &amp;quot;The Inferno Operating System&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[http://doc.cat-v.org/inferno/4th_edition/styx Rob Pike and Dennis M. Ritchie, &amp;quot;The Styx Architecture for Distributed Systems&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
==Questions on the Readings==&lt;br /&gt;
&lt;br /&gt;
Note that on Monday there will be a general group discussion on the readings - we won&#039;t split into groups.  These questions are just for you to consider, you don&#039;t need to address them in your write-ups.&lt;br /&gt;
&lt;br /&gt;
# How did the thinking of Rob Pike, Dennis Ritchie, and the rest of the OS researchers at Bell Labs change over time?  Specifically, what ideas did they throw away, and what new ideas came to replace them?&lt;br /&gt;
# What ideas remained consistent across UNIX, Plan 9, and Inferno?&lt;br /&gt;
# Why didn&#039;t Plan 9 or Inferno have the success of UNIX?&lt;br /&gt;
# What was the impact of Plan 9 and Inferno?&lt;br /&gt;
# How does the Styx architecture compare with the architectures of other distributed OSs we&#039;ve been studying?&lt;br /&gt;
# What does Styx &#039;&#039;not&#039;&#039; capture about the design of UNIX, Plan 9, or Inferno?&lt;br /&gt;
# Is the view of &amp;quot;everything is a file&amp;quot; still relevant today, or are other approaches (e.g. object-oriented interfaces) more suitable?  Why?&lt;br /&gt;
&lt;br /&gt;
==Project Topic Discussion==&lt;br /&gt;
&lt;br /&gt;
If you are planning on doing a class project, please come to class prepared to give an informal presentation on your chosen topic.  If you wish, you may use a formal electronic presentation; if you do so, please either email it to me by Wednesday morning or make it web accessible.  Note that you should only plan to speak for about five minutes - so please, no more than five slides!&lt;br /&gt;
&lt;br /&gt;
In your presentation, please address the following issues:&lt;br /&gt;
&lt;br /&gt;
# What is the topic of your project?&lt;br /&gt;
# If you are writing a research proposal, what is your idea, and why is it novel?&lt;br /&gt;
# Why did you choose it - why do you find it interesting?&lt;br /&gt;
# How is it related to distributed operating systems?&lt;br /&gt;
# (Very briefly) What related work have you identified?&lt;br /&gt;
&lt;br /&gt;
Everyone will receive a &amp;quot;group participation&amp;quot; grade on Wednesday.  Those presenting a project idea will be graded on the basis of their presentation.  (Note that slides are not necessary in order to get an A; all you need to do is coherently and concisely present your current thinking on your project topic.)&lt;br /&gt;
&lt;br /&gt;
Everyone else will be graded on the quality of the feedback given to the presenters.  I expect everyone not presenting to give at least one constructive comment during class - that will get you a B.  More participation, the better the grade - unless you start going overboard.  Remember, time will be limited!&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Bell_Labs&amp;diff=1782</id>
		<title>Bell Labs</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Bell_Labs&amp;diff=1782"/>
		<updated>2008-03-02T17:30:38Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Readings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-03-03/unix.pdf Dennis M. Ritchie and Ken Thompson, &amp;quot;The UNIX Time-Sharing System&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[http://doc.cat-v.org/plan_9/4th_edition/papers/9 Rob Pike et al., &amp;quot;Plan 9 from Bell Labs&amp;quot;] ([http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.25.8019&amp;amp;rep=rep1&amp;amp;type=pdf PDF])&lt;br /&gt;
&lt;br /&gt;
[http://doc.cat-v.org/inferno/4th_edition/inferno_OS Sean Dorward et al., &amp;quot;The Inferno Operating System&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[http://doc.cat-v.org/inferno/4th_edition/styx Rob Pike and Dennis M. Ritchie, &amp;quot;The Styx Architecture for Distributed Systems&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
==Questions on the Readings==&lt;br /&gt;
&lt;br /&gt;
Note that on Monday there will be a general group discussion on the readings - we won&#039;t split into groups.  These questions are just for you to consider, you don&#039;t need to address them in your write-ups.&lt;br /&gt;
&lt;br /&gt;
# How did the thinking of Rob Pike, Dennis Ritchie, and the rest of the OS researchers at Bell Labs change over time?  Specifically, what ideas did they throw away, and what new ideas came to replace them?&lt;br /&gt;
# What ideas remained consistent across UNIX, Plan 9, and Inferno?&lt;br /&gt;
# Why didn&#039;t Plan 9 or Inferno have the success of UNIX?&lt;br /&gt;
# What was the impact of Plan 9 and Inferno?&lt;br /&gt;
# How does the Styx architecture compare with the architectures of other distributed OSs we&#039;ve been studying?&lt;br /&gt;
# What does Styx &#039;&#039;not&#039;&#039; capture about the design of UNIX, Plan 9, or Inferno?&lt;br /&gt;
# Is the view of &amp;quot;everything is a file&amp;quot; still relevant today, or are other approaches (e.g. object-oriented interfaces) more suitable?  Why?&lt;br /&gt;
&lt;br /&gt;
==Project Topic Discussion==&lt;br /&gt;
&lt;br /&gt;
If you are planning on doing a class project, please come to class prepared to give an informal presentation on your chosen topic.  If you wish, you may use a formal electronic presentation; if you do so, please either email it to me by Wednesday morning or make it web accessible.  Note that you should only plan to speak for about five minutes - so please, no more than five slides!&lt;br /&gt;
&lt;br /&gt;
In your presentation, please address the following issues:&lt;br /&gt;
&lt;br /&gt;
# What is the topic of your project?&lt;br /&gt;
# If you are writing a research proposal, what is your idea, and why is it novel?&lt;br /&gt;
# Why did you choose it - why do you find it interesting?&lt;br /&gt;
# How is it related to distributed operating systems?&lt;br /&gt;
# (Very briefly) What related work have you identified?&lt;br /&gt;
&lt;br /&gt;
Everyone will receive a &amp;quot;group participation&amp;quot; grade on Wednesday.  Those presenting a project idea will be graded on the basis of their presentation.  (Note that slides are not necessary in order to get an A; all you need to do is coherently and concisely present your current thinking on your project topic.)&lt;br /&gt;
&lt;br /&gt;
Everyone else will be graded on the quality of the feedback given to the presenters.  I expect everyone not presenting to give at least one constructive comment during class - that will get you a B.  More participation, the better the grade - unless you start going overboard.  Remember, time will be limited!&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Proposal_Meeting_Schedule&amp;diff=1542</id>
		<title>Proposal Meeting Schedule</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Proposal_Meeting_Schedule&amp;diff=1542"/>
		<updated>2007-10-26T04:05:13Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Wednesday, Nov. 7th */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Tuesday, Oct. 30th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:10 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:30 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:50 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 3:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3:20 PM&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Wednesday, Oct. 31st==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:10 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:50 AM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:00 PM&lt;br /&gt;
| Neil Dickson&lt;br /&gt;
|-&lt;br /&gt;
| 12:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:20 PM&lt;br /&gt;
| Richard Gould&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Thursday, Nov. 1st==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 1:30 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:50 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:10 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:20 PM&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Tuesday, Nov. 6th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:10 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| David Tremayne&lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:30 PM&lt;br /&gt;
| Maria Krol&lt;br /&gt;
|-&lt;br /&gt;
| 2:40 PM&lt;br /&gt;
| Adam Becevello&lt;br /&gt;
|-&lt;br /&gt;
| 2:50 PM&lt;br /&gt;
| Jeff Snell&lt;br /&gt;
|-&lt;br /&gt;
| 3:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 3:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3:20 PM&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Wednesday, Nov. 7th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 10:30 AM&lt;br /&gt;
| Kenneth Chan&lt;br /&gt;
|-&lt;br /&gt;
| 10:40 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 10:50 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:00 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:10 AM&lt;br /&gt;
| Adam McNamara&lt;br /&gt;
|-&lt;br /&gt;
| 11:20 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:30 AM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 11:40 AM&lt;br /&gt;
| Geoffrey Johnson&lt;br /&gt;
|-&lt;br /&gt;
| 11:50 AM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 12:10 PM&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 12:20 PM&lt;br /&gt;
| Roy Hooper&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Thursday, Nov. 8th==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| 1:30 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:40 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 1:50 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:00 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:10 PM&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| 2:20 PM&lt;br /&gt;
| &lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=High-level_Synchronization_and_IPC&amp;diff=1519</id>
		<title>High-level Synchronization and IPC</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=High-level_Synchronization_and_IPC&amp;diff=1519"/>
		<updated>2007-10-17T17:14:16Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Limitations of Semaphores */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Dining Philosophers Problem==&lt;br /&gt;
&lt;br /&gt;
[[Image:DiningPhilosophers.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Thought experiment used to understand synchronization primitives&lt;br /&gt;
* Five philosophers are dining at a frugal Chinese restaurant where they are only provided with one chopstick each. They need two chopsticks to pick up the noodles in order to eat.&lt;br /&gt;
** Whenever they are hungry they pick up two chopsticks to eat.&lt;br /&gt;
** When they are done eating they put down the chopsticks.&lt;br /&gt;
** The philosophers do not talk to each other about eating, only high-minded ideas.&lt;br /&gt;
* Have to define a strategy to make sure that no philosopher starves to death.&lt;br /&gt;
* We can think of each chopstick as a semaphore.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
* &#039;&#039;&#039;Starvation&#039;&#039;&#039;&lt;br /&gt;
** One (or more) philosopher is never able to get two chopsticks and dies.&lt;br /&gt;
* &#039;&#039;&#039;Deadlock&#039;&#039;&#039;&lt;br /&gt;
** All philosophers have one chopstick and are waiting for another philosopher to put one down and they all starve to death.&lt;br /&gt;
&lt;br /&gt;
===Possible Solutions===&lt;br /&gt;
* Could use a scheme involving timeouts, but we want a &#039;&#039;PERFECT&#039;&#039; solution&lt;br /&gt;
* Could use synchronization; philosophers either grab one chopstick or none, this illustrates &#039;&#039;&#039;AND synchronization&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
How could AND synchronization be implemented?&lt;br /&gt;
* Could turn multiple semaphores into one by abstracting them into a single class.&lt;br /&gt;
* In general this is done by encapsulating the resources and using another process/thread/program to hand them out&lt;br /&gt;
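&lt;br /&gt;
A minimal sketch of the encapsulation approach above in Python (chopsticks as semaphores; a single &#039;&#039;table&#039;&#039; lock hands out a pair atomically, so the circular wait that causes deadlock can never form):&lt;br /&gt;

```python
import threading

# Dining philosophers with AND-style synchronization: both chopsticks
# are acquired under one table lock, or not at all.
N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]
table = threading.Lock()   # serializes picking up a pair
meals = []                 # record of who managed to eat

def dine(i):
    left = chopsticks[i]
    right = chopsticks[(i + 1) % N]
    with table:            # grab both chopsticks atomically
        left.acquire()
        right.acquire()
    meals.append(i)        # "eating" happens here, concurrently
    left.release()
    right.release()
```

A philosopher waiting inside the table lock only ever waits on eaters, who release their chopsticks without needing the lock, so everyone eventually eats.&lt;br /&gt;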
&lt;br /&gt;
==Limitations of Semaphores==&lt;br /&gt;
&lt;br /&gt;
Semaphores are hard to use...&lt;br /&gt;
* If the programmer forgets to grab the semaphore, this can lead to corruption.&lt;br /&gt;
* If the programmer never releases the semaphore, this can lead to the program locking up.&lt;br /&gt;
&lt;br /&gt;
The underlying problem is that the down and up operations need to be matched; with non-standard exits this can be difficult.&lt;br /&gt;
&lt;br /&gt;
But in C it&#039;s all you&#039;ve got...&lt;br /&gt;
* Solution: write a program to check the code (e.g. the Stanford Checker)&lt;br /&gt;
** Problem: not reliable, can produce many false positives&lt;br /&gt;
&lt;br /&gt;
How about building synchronization into the language as a base level construct?&lt;br /&gt;
* Java&#039;s &#039;&#039;synchronized&#039;&#039;&lt;br /&gt;
** Hides semaphores from developer&lt;br /&gt;
** Developer still has to decide where to synchronize&lt;br /&gt;
** You don&#039;t want to synchronize all your code, since that leads to effectively single-threaded behaviour&lt;br /&gt;
&lt;br /&gt;
General idea: the Monitor Object&lt;br /&gt;
* An object instance where only one thread can be executing certain pieces of code at once.&lt;br /&gt;
* Provides exclusive access to variables&lt;br /&gt;
&lt;br /&gt;
From review lecture Oct 17th:&lt;br /&gt;
&lt;br /&gt;
A monitor is a language construct for doing mutual exclusion.  In Java this is known as synchronized.&lt;br /&gt;
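&lt;br /&gt;
A minimal sketch of a monitor-style object (Python for illustration; the Counter class is an assumed example, not from the course): every public method takes the same internal lock, so at most one thread is ever executing inside the object, much like Java&#039;s synchronized methods.&lt;br /&gt;

```python
import threading

class Counter:
    """Monitor-style object: one internal lock guards every method,
    so at most one thread executes inside the object at a time."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1

    def value(self):
        with self._lock:
            return self._value
```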
&lt;br /&gt;
==Events and Message Passing==&lt;br /&gt;
&lt;br /&gt;
[[image:Structs.jpg]]&lt;br /&gt;
&lt;br /&gt;
===Message Passing===&lt;br /&gt;
* Uses buffers to hold messages between sender and receiver&lt;br /&gt;
* Message passing scales to larger contexts, such as between processes or across machines&lt;br /&gt;
&lt;br /&gt;
===Events===&lt;br /&gt;
* Message queue&lt;br /&gt;
* When do they occur?&lt;br /&gt;
** User does something&lt;br /&gt;
** Device does something&lt;br /&gt;
* Synchronization can be based around waiting for an event to occur.&lt;br /&gt;
&lt;br /&gt;
When an event occurs:&lt;br /&gt;
* It has to be stored (in a buffer)&lt;br /&gt;
* It might not be processed right away&lt;br /&gt;
** What if the event is no longer relevant?&lt;br /&gt;
*** Could implement events with timeouts&lt;br /&gt;
** What if no one&#039;s listening for the event?&lt;br /&gt;
*** Could drop the event&lt;br /&gt;
** The challenge is determining which events should be dropped and which should be queued.&lt;br /&gt;
*** In the case of a file open, if the event is dropped the associated resource leaks&lt;br /&gt;
*** In the case of mouse movement when no application is listening, we want to drop the event so that an application that starts listening later is not flooded with stale mouse events&lt;br /&gt;
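&lt;br /&gt;
The drop-versus-queue decision above can be sketched as a small policy on a bounded queue (Python; the function and event names are hypothetical):&lt;br /&gt;

```python
import queue

def post_event(q, event, droppable):
    """Queue an event using a drop-versus-queue policy:
    droppable events (e.g. mouse moves) are discarded when the
    queue is full; critical events (e.g. a file-open completion)
    block until there is room, so they are never lost.
    Returns True if queued, False if dropped."""
    if droppable:
        try:
            q.put_nowait(event)
        except queue.Full:
            return False  # dropped: no listener will miss it
    else:
        q.put(event)  # must not be lost; wait for space
    return True
```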
&lt;br /&gt;
==Message passing in a WAN==&lt;br /&gt;
&lt;br /&gt;
Message passing in a WAN is unreliable. Packets can be dropped.&lt;br /&gt;
&lt;br /&gt;
In some cases it is acceptable to lose the data&lt;br /&gt;
* Streaming video&lt;br /&gt;
* VOIP&lt;br /&gt;
* Cell phones&lt;br /&gt;
&lt;br /&gt;
In others we need a reliable transmission&lt;br /&gt;
* Downloading content&lt;br /&gt;
&lt;br /&gt;
===TCP/IP===&lt;br /&gt;
* TCP provides a mechanism to resend dropped packets&lt;br /&gt;
* If we build on top of TCP/IP we can assume reliable transmission&lt;br /&gt;
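&lt;br /&gt;
As a small illustration of building on TCP (Python sockets on the loopback interface; the helper names are assumptions, not course code), the application can treat the connection as a reliable byte pipe and let the kernel handle retransmission:&lt;br /&gt;

```python
import socket
import threading

def start_echo_server():
    """Start a one-shot loopback echo server and return its port.
    Built on TCP, so the client can assume the bytes arrive complete
    and in order; the kernel retransmits any dropped packets."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))  # echo one message back
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def echo(port, payload):
    """Send payload to the echo server and return the reply."""
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.sendall(payload)
    reply = client.recv(1024)
    client.close()
    return reply
```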
&lt;br /&gt;
===NAT (Network Address Translation)===&lt;br /&gt;
* Used in local networks; to the outside world the whole network appears as a single address. NAT devices remember outgoing requests so they can route each response back to the correct internal host.&lt;br /&gt;
* Firewalls will automatically drop messages that do not have a corresponding outgoing request&lt;br /&gt;
** Firewalls will often only let certain types of traffic through, frequently only TCP&lt;br /&gt;
*** Application developers may want to use UDP but may be forced to use TCP to get through the firewall&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Main_Page&amp;diff=1511</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Main_Page&amp;diff=1511"/>
		<updated>2007-10-15T17:16:22Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Lectures and Deadlines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the Carleton University COMP 3000: Operating Systems (Fall 2007) wiki.&lt;br /&gt;
&lt;br /&gt;
==Course Outline==&lt;br /&gt;
&lt;br /&gt;
The course outline can be found [http://www.scs.carleton.ca/%7Ecourses/course_outline.php?number=COMP%203000A&amp;amp;amp;term=Fall&amp;amp;amp;year=2007 here].  A backup copy is [http://www.scs.carleton.ca/~soma/os-2007f/course_outline_printable.php.html here].&lt;br /&gt;
&lt;br /&gt;
==Running Linux at Home==&lt;br /&gt;
&lt;br /&gt;
To give you an opportunity to become more familiar with Linux and UNIX, consider running a Linux distribution on your own machine if you can.  I suggest looking at [http://www.ubuntu.com Ubuntu] or [http://www.debian.org Debian] (the distribution used in the lab).&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t want to dual boot from Windows/MacOS X or you just don&#039;t want to worry about repartitioning, you can run Linux in a virtual machine.&lt;br /&gt;
See [[Running Linux in a Virtual Machine]] for more information.&lt;br /&gt;
&lt;br /&gt;
==Lectures and Deadlines==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table style=&amp;quot;width: 100%;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;4&amp;quot; cellspacing=&amp;quot;0&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;tr valign=&amp;quot;top&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Date&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Due/In Class&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Topics&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
    &amp;lt;th&amp;gt;&lt;br /&gt;
    &amp;lt;p align=&amp;quot;left&amp;quot;&amp;gt;Readings&amp;lt;/p&amp;gt;&lt;br /&gt;
    &amp;lt;/th&amp;gt;&lt;br /&gt;
  &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;Sept. 10th&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt; &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;[[Class Outline]]&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;p&amp;gt;&lt;br /&gt;
      &amp;lt;/p&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Sept. 12th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#1: [[Introduction]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 1&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Sept. 17th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#2: [[Using the Operating System]], [http://www.scs.carleton.ca/~soma/os-2007f/labs/comp3000-2007F-lab1.pdf Lab 1 introduction]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 2&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Sept. 19th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#3: [[Operating System Organization]] (Glenn)&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 3&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Sept. 24th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#4: [[Computer Organization]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 4&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Sept. 26th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#5: [[Device Management]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 5&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 1st&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;[http://www.scs.carleton.ca/~soma/os-2007f/labs/comp3000-2007F-lab1-solution.pdf Lab 1]&amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#6: [[Implementing Processes, Threads, and Resources]], [http://www.scs.carleton.ca/~soma/os-2007f/labs/lab2 Lab 2 introduction]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 6&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 3rd&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#7: [[Basic Synchronization Principles]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 8&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 8th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Thanksgiving&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;[http://www.cryptonomicon.com/beginning.html In the Beginning was the Command Line].&lt;br /&gt;
          Prettier version [http://www.csn.ul.ie/%7Ecaolan/Texts/stephenson.html here] (optional)&amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 10th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#8: [[High-level Synchronization and IPC]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 9&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 15th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Lab 2 (10pm on WebCT)&lt;br /&gt;
      &lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;[[Test 1 Review]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 17th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Test 1&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 22nd&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#9: [[Scheduling]], [[Lab 3 introduction]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 7 &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 24th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#10: [[Deadlock]] &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 10 &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 29th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Paper Outline &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#11: [[Memory Management]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 11&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Oct. 31st&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#12: [[Virtual Memory]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 12&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 5th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Lab 3&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#13: [[File Management]], [[Lab 4 introduction]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 13&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 7th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#14: [[Protection and Security]] &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 14&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 12th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#15: [[Networks]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 15&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 14th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt; &lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#16: [[Remote Files]] (Glenn)&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 16&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 19th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#17: [[Networks 2]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 21st&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#18: [[Security 2]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 26th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Lab 4&lt;br /&gt;
      &lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;[[Test 2 Review]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Nov. 28th&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Test 2&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt; &lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Chap. 17 &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Dec. 3rd&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;Paper Final Draft&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;#21: [[The Future of Operating Systems]]&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
      &amp;lt;td&amp;gt;&lt;br /&gt;
      &amp;lt;/td&amp;gt;&lt;br /&gt;
    &amp;lt;/tr&amp;gt; &lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1489</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1489"/>
		<updated>2007-10-03T22:13:06Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Configuring Debian */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring Debian ==&lt;br /&gt;
&lt;br /&gt;
In order to do the first lab, you will likely want to install manpages-dev (as root):&lt;br /&gt;
 apt-get install manpages-dev&lt;br /&gt;
&lt;br /&gt;
In order to do the second lab, you will need the kernel sources and gcc.  &lt;br /&gt;
  apt-get install linux-source-2.6.18 &lt;br /&gt;
&lt;br /&gt;
Remove or comment out the line:&lt;br /&gt;
 deb cdrom:[Debian GNU/Linux 4.0 r1 _Etch_ - Official i386 CD Binary-1 20070819-11:52]/ etch contrib main&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t modified /etc/apt/sources.list to remove the cdrom source, you may want to do so first to install gcc and other packages.&lt;br /&gt;
  apt-get install gcc &lt;br /&gt;
  apt-get install libc-dev&lt;br /&gt;
&lt;br /&gt;
You&#039;ll also likely need to install the linux headers for your kernel.  If you&#039;re running the latest etch, the linux-headers-2.6.18-5 or linux-headers-2.6.18-5-686 should be what you want.&lt;br /&gt;
 apt-get install linux-headers-2.6.18-5&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1488</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1488"/>
		<updated>2007-10-03T22:11:29Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Configuring Debian */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring Debian ==&lt;br /&gt;
&lt;br /&gt;
In order to do the first lab, you will likely want to install manpages-dev (as root):&lt;br /&gt;
 apt-get install manpages-dev&lt;br /&gt;
&lt;br /&gt;
In order to do the second lab, you will need the kernel sources and gcc.  &lt;br /&gt;
  apt-get install linux-source-2.6.18 &lt;br /&gt;
&lt;br /&gt;
Remove or comment out the line:&lt;br /&gt;
 deb cdrom:[Debian GNU/Linux 4.0 r1 _Etch_ - Official i386 CD Binary-1 20070819-11:52]/ etch contrib main&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t modified /etc/apt/sources.list to remove the cdrom source, you may want to do so first to install gcc.&lt;br /&gt;
  apt-get install gcc&lt;br /&gt;
&lt;br /&gt;
You&#039;ll also likely need to install the linux headers for your kernel.  If you&#039;re running the latest etch, the linux-headers-2.6.18-5 or linux-headers-2.6.18-5-686 should be what you want.&lt;br /&gt;
 apt-get install linux-headers-2.6.18-5&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1487</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1487"/>
		<updated>2007-10-03T22:04:34Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Configuring Debian */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring Debian ==&lt;br /&gt;
&lt;br /&gt;
In order to do the first lab, you will likely want to install manpages-dev (as root):&lt;br /&gt;
 apt-get install manpages-dev&lt;br /&gt;
&lt;br /&gt;
In order to do the second lab, you will need the kernel sources and gcc.  &lt;br /&gt;
  apt-get install linux-source-2.6.18 &lt;br /&gt;
&lt;br /&gt;
Remove or comment out the line:&lt;br /&gt;
 deb cdrom:[Debian GNU/Linux 4.0 r1 _Etch_ - Official i386 CD Binary-1 20070819-11:52]/ etch contrib main&lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t modified /etc/apt/sources.list to remove the cdrom source, you may want to do so first to install gcc.&lt;br /&gt;
  apt-get install gcc&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1486</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1486"/>
		<updated>2007-10-03T22:04:17Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Configuring Debian */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring Debian ==&lt;br /&gt;
&lt;br /&gt;
In order to do the first lab, you will likely want to install manpages-dev (as root):&lt;br /&gt;
 apt-get install manpages-dev&lt;br /&gt;
&lt;br /&gt;
In order to do the second lab, you will need the kernel sources and gcc.  &lt;br /&gt;
  apt-get install linux-source-2.6.18 &lt;br /&gt;
&lt;br /&gt;
If you haven&#039;t modified /etc/apt/sources.list to remove the cdrom source, you may want to do so first to install gcc.&lt;br /&gt;
  apt-get install gcc&lt;br /&gt;
Remove or comment out the line:&lt;br /&gt;
 deb cdrom:[Debian GNU/Linux 4.0 r1 _Etch_ - Official i386 CD Binary-1 20070819-11:52]/ etch contrib main&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1485</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1485"/>
		<updated>2007-10-03T21:59:38Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Configuring Debian */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring Debian ==&lt;br /&gt;
&lt;br /&gt;
In order to do the first lab, you will likely want to install manpages-dev (as root):&lt;br /&gt;
 apt-get install manpages-dev&lt;br /&gt;
&lt;br /&gt;
In order to do the second lab, you will need the kernel sources:&lt;br /&gt;
  apt-get install linux-source-2.6.18&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1469</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1469"/>
		<updated>2007-09-24T17:07:13Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Configuring Debian ==&lt;br /&gt;
&lt;br /&gt;
In order to do the first lab, you will likely want to install manpages-dev (as root):&lt;br /&gt;
 apt-get install manpages-dev&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1468</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1468"/>
		<updated>2007-09-24T17:06:21Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Choosing a virtual machine application */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial - trial available)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://isv-image.ubuntu.com/vmware/ Ubuntu (Gnome) and kubuntu (KDE)] official 7.04, 6.10, 6.06 images for VMWare Player &lt;br /&gt;
**&#039;&#039;&#039;Kubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
**&#039;&#039;&#039;Ubuntu-7.04-desktop-amd64.zip Tested OK&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;br /&gt;
&lt;br /&gt;
Note: The amd64 images work with Intel 64-bit CPUs (e.g. Core 2 Duo).&lt;br /&gt;
&lt;br /&gt;
When connected to the university network, NAT will allow internet access for the virtual machine. Ensure that any firewalls are configured to trust the VMWare Virtual Ethernet Adaptor.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1462</id>
		<title>Running Linux in a Virtual Machine</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Running_Linux_in_a_Virtual_Machine&amp;diff=1462"/>
		<updated>2007-09-22T22:46:19Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Choosing a virtual machine application */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two things you need to run Linux in a virtual machine: a virtual machine application and an image.&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine application==&lt;br /&gt;
&lt;br /&gt;
If you are running Windows, some popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.vmware.com/products/player/ VMWare Player] (commercial but free)&lt;br /&gt;
* [http://fabrice.bellard.free.fr/qemu/ QEMU] (open source)&lt;br /&gt;
* [http://www.virtualbox.org/ VirtualBox] (commercial w/ free trial and open source)&lt;br /&gt;
&lt;br /&gt;
If you are running OSX, two popular options for running Linux in a virtual machine are:&lt;br /&gt;
&lt;br /&gt;
* [http://www.parallels.com/ Parallels] (commercial)&lt;br /&gt;
* [http://www.vmware.com/products/fusion/ VMWare Fusion] (commercial)&lt;br /&gt;
&lt;br /&gt;
==Choosing a virtual machine image==&lt;br /&gt;
&lt;br /&gt;
You can do a fresh install of virtually any Linux distribution in most modern virtual machine environments, including [http://www.debian.org Debian] and [http://www.ubuntu.com Ubuntu].  However, it is easier to start with a prebuilt machine image.  Such images are often referred to as [http://en.wikipedia.org/wiki/Virtual_appliance virtual appliances].&lt;br /&gt;
&lt;br /&gt;
There are a variety of images available.  Please update the list below with your experiences running these virtual machines:&lt;br /&gt;
&lt;br /&gt;
* [http://www.visoracle.com/download/debian/ Debian 4.0 (Etch) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* [http://www.vmware.com/vmtn/appliances/directory/954 Ubuntu 7.04 (Feisty Fawn) image] for VMWare Player &#039;&#039;&#039;UNTESTED&#039;&#039;&#039;&lt;br /&gt;
* Others?&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1456</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1456"/>
		<updated>2007-09-18T02:24:01Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Term Paper */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the bonus of up to 3% for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers,&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers; it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is wrong now, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer.  It will need more research and a bit more reading.  You&lt;br /&gt;
should be working together.  If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S.  Talk to each other to learn.  The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about the operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are and where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in: X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit&lt;br /&gt;
in modern graphical environments, read Neal Stephenson&#039;s essay.)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the term paper, you don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat: you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with Professor Somayaji.  You have to get permission&lt;br /&gt;
a week before the outline is due (the paper outline is due Oct 29).&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of?  Say you wanted to implement a new&lt;br /&gt;
filesystem.  This is inherently more work, because you still have to give a&lt;br /&gt;
nice write-up.  The report should still cite other work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes.  Just google and see what results come up.  If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* USENIX association has a number of systems oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a physical file is used&lt;br /&gt;
to hold many pages or documents, but that a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document. &lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read from and write to.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there are potentially more than &lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was the classic Mac OS (Mac OS 9 and earlier).&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?  &lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the files.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file, &lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the &lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a specific number of files open at any given time.&lt;br /&gt;
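As an illustration (not from the lecture), here is a minimal Python sketch of this stateful interface, using the os-level calls that wrap the underlying system calls; the filename is made up:&lt;br /&gt;

```python
import os
import tempfile

# Minimal sketch of the stateful file API described above (illustrative,
# not from the lecture): open returns a numeric handle, read/write/seek
# operate on that handle, and close frees the kernel resources behind it.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)   # open: get a handle
os.write(fd, b"hello world")                        # write through the handle
os.lseek(fd, 0, os.SEEK_SET)                        # seek back to the start
data = os.read(fd, 5)                               # read the first 5 bytes
os.lseek(fd, 0, os.SEEK_END)                        # append = seek to end...
os.write(fd, b"!")                                  # ...then write
os.close(fd)                                        # close: free the handle
```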
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks.  They&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks like we access RAM?   &lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason that we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss  filesystems.  &lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean this is the most absolutely minimal abstraction.  &lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed size blocks.  However, that&#039;s getting too low level,&lt;br /&gt;
because now all programs have to worry about where they put files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy.  Some models of filesystems try to get&lt;br /&gt;
away from it.  Look at the PalmOS - it resisted having files for a long time, but eventually&lt;br /&gt;
gave in to support removable media, but the primary OS and API still don&#039;t support files.  &lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy files abstraction too, but &lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
==== Processes and Threads ====&lt;br /&gt;
&lt;br /&gt;
There&#039;s lots of other devices, but from an OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, or it may not be.  It&#039;s a set of addresses you&lt;br /&gt;
can call your own. &lt;br /&gt;
* Execution context is CPU state (Registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved, and then restored later to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is relatively new in computing.&lt;br /&gt;
However, if you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640kb limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  In order to run at the same&lt;br /&gt;
time, they had to co-exist in the physical memory address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with three&lt;br /&gt;
operations: FORK, JOIN, and QUIT (capitalized here to differentiate them from the newer terms).&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
By FORKing, the main thing to note is that you&#039;re creating two execution&lt;br /&gt;
contexts, that may be sharing memory. The execution may start at the same&lt;br /&gt;
place, but may be diverging. How do you stop creating more and more and more&lt;br /&gt;
of these, to bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running; if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away, otherwise you need to&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state.  The&lt;br /&gt;
difference here, is if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, What is the value of &#039;&#039;x?&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking (or use other means).&lt;br /&gt;
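This isolation can be seen in a small Python sketch (illustrative; os.fork wraps the Unix fork call, so it only runs on Unix-like systems): the child writing to x never affects the parent.&lt;br /&gt;

```python
import os

# Illustrative sketch: after fork, parent and child have separate copies
# of the address space, so the child changing x cannot affect the parent.
x = 5
pid = os.fork()
if pid == 0:
    x = 7           # modifies only the child copy
    os._exit(x)     # smuggle the child value out via its exit status
else:
    _, status = os.waitpid(pid, 0)
    child_x = os.WEXITSTATUS(status)
    # here x is still 5, while child_x is 7
```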
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: That&lt;br /&gt;
they have exactly the same state when you fork. If this was true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
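The idiom looks roughly like this in Python (illustrative; os.fork wraps the Unix call, and the child here is a trivial disposable worker):&lt;br /&gt;

```python
import os

# Sketch of the usual fork idiom: fork takes no arguments and returns
# the child pid in the parent and 0 in the child.
pid = os.fork()
if pid == 0:
    # child: do one disposable task, then go away
    os._exit(0)
else:
    # parent: pid tracks the child it just created
    reaped, _ = os.waitpid(pid, 0)
```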
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes.  How do we do something different?  In&lt;br /&gt;
principle we don&#039;t need anything else. We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec(). &lt;br /&gt;
Exec  replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
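A hedged Python sketch of the fork-then-exec pattern (assuming a Unix system where /bin/echo exists):&lt;br /&gt;

```python
import os

# Illustrative fork-then-exec: the child replaces itself with a new
# program (/bin/echo, assumed to exist) while keeping its pid; the
# parent waits for it to finish.
pid = os.fork()
if pid == 0:
    os.execv("/bin/echo", ["echo", "hello from the new program"])
    os._exit(127)   # reached only if execv failed
else:
    _, status = os.waitpid(pid, 0)
```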
&lt;br /&gt;
The Windows equivalent is CreateProcess()&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic example is a&lt;br /&gt;
daemon listening on the network for incoming connections. When an&lt;br /&gt;
incoming request comes in, the main program could deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
the typical Unix idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
&lt;br /&gt;
You can have shared memory.  But the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
With threads you have to implement protections; with processes, you have to&lt;br /&gt;
opt in to share.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how does this play on multiple cores?  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two talks about the model presented to the programmer: an API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks, so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency, which most applications have to deal with. You&#039;ll learn this&lt;br /&gt;
here because the OS people did it first.&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display Postscript.  Postscript is a&lt;br /&gt;
fully fledged programming language, originally used for printers.&lt;br /&gt;
It was developed for laser printers by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough...  So postscript programs were sent to the printer instead.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  Postscript is a funny little language:&lt;br /&gt;
it&#039;s a postfix language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto the stack, then run an operator on them.&lt;br /&gt;
The same goes for function calls.&lt;br /&gt;
&lt;br /&gt;
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display Postscript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display Postscript. Because Postscript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the one that used Display Postscript first... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: the&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is and controls your display,&lt;br /&gt;
mouse, and keyboard.  To display a window, remotely or locally, you run a&lt;br /&gt;
program known as a client in X-Windows, which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art. The&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath it&#039;s very nice. But they knew what they didn&#039;t know, so they made it&lt;br /&gt;
possible for users to decide what it should look like themselves, so that you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you do things like moving your mouse to a window -- what&lt;br /&gt;
happens? Does the window take focus or not? Requiring a click is known as click-to-focus;&lt;br /&gt;
in older X systems, you could just point your mouse there, and focus followed.&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it... Or how do you handle key sequences, or minimizing? Who decides how to&lt;br /&gt;
do all this? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client, with some&lt;br /&gt;
special privileges, so it can run anywhere. It could run 1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There are window managers like IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, and twm, plus graphical toolkits like Motif, GTK, and Qt,&lt;br /&gt;
abstracted away. These choices are all available because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1455</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1455"/>
		<updated>2007-09-18T01:21:46Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers,&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers; it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often&lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is wrong now, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer. It will need more research, and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are, and where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in: X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit&lt;br /&gt;
into modern graphical environments, read Neal Stephenson&#039;s essay.)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the Term Paper... You don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat: you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of? Say you wanted to implement a new&lt;br /&gt;
filesystem. This is inherently more work, because you still have to give a&lt;br /&gt;
nice write-up. The report should still cite other work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes. Just google and see what results come up. If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* The USENIX Association has a number of systems-oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a paper file is used&lt;br /&gt;
to hold many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document.&lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read and write from.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there are potentially more than &lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was OS 9.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?  &lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the files.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file, &lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the &lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a specific number of files open at any given time.&lt;br /&gt;
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks... They&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks like we access RAM?   &lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason that we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.&lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean this is the most absolutely minimal abstraction.  &lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed size blocks.  However, that&#039;s getting too low level,&lt;br /&gt;
because now all programs have to worry about where they put files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy.  Some models of filesystems try to get&lt;br /&gt;
away from it.  Look at PalmOS: it resisted having files for a long time, but eventually&lt;br /&gt;
gave in to support removable media, though the primary OS and API still don&#039;t support files.&lt;br /&gt;
Microsoft has been wanting to get away from the legacy file abstraction too, but&lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
==== Processes and Threads ====&lt;br /&gt;
&lt;br /&gt;
There&#039;s lots of other devices, but from an OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, or it may not be. It&#039;s a set of addresses you&lt;br /&gt;
can call your own.&lt;br /&gt;
* Execution context is CPU state (registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved, and then restored to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is somewhat new in modern computing.&lt;br /&gt;
However, if you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640kb limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  In order to run at the same&lt;br /&gt;
time, they had to co-exist in the physical memory address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with the&lt;br /&gt;
operations FORK, JOIN, and QUIT (capitalized here to differentiate them from the newer terms).&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
By FORKing, the main thing to note is that you&#039;re creating two execution&lt;br /&gt;
contexts, that may be sharing memory. The execution may start at the same&lt;br /&gt;
place, but may be diverging. How do you stop creating more and more and more&lt;br /&gt;
of these, to bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running; if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you need to&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the calling thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state. The&lt;br /&gt;
difference here is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, what is the value of &#039;&#039;x&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking.  (Or through other means)&lt;br /&gt;
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: That&lt;br /&gt;
they have exactly the same state when you fork. If this was true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes. How do we do something different? In&lt;br /&gt;
principle we don&#039;t need anything else. We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec().&lt;br /&gt;
Exec replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() and then exec() the desired&lt;br /&gt;
program in the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
&lt;br /&gt;
The Windows equivalent is CreateProcess().&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program (including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork(): suspend the parent, fork, exec in the child,&lt;br /&gt;
then let the parent resume.  This avoided copying the address space when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic example is a&lt;br /&gt;
daemon listening on the network for incoming connections. When an&lt;br /&gt;
incoming request comes in, the main program could deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
the typical Unix idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
&lt;br /&gt;
You can have shared memory.  But the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
With threads you have to implement protections; with processes, you have to&lt;br /&gt;
opt in to share.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how does this play on multiple cores?  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two talks about the model presented to the programmer: an API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks, so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency, which most applications have to deal with. You&#039;ll learn this&lt;br /&gt;
here because the OS people did it first.&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display PostScript.  PostScript is a&lt;br /&gt;
fully fledged programming language, originally developed for laser&lt;br /&gt;
printers by a little company called Adobe.  When laser printers came out,&lt;br /&gt;
they had really high resolutions, and it was hard to get the data&lt;br /&gt;
necessary to print a page to the printer fast enough...  So PostScript&lt;br /&gt;
programs were sent to the printer instead.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  PostScript is a funny little language:&lt;br /&gt;
it&#039;s a postfix operator language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto a stack, then run an operator on them.&lt;br /&gt;
The same goes for function calls.&lt;br /&gt;
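The postfix evaluation model described above can be sketched in a few lines. This is a toy illustration of the stack discipline, not actual PostScript:

```python
# Minimal postfix (PostScript-style) evaluator: operands are pushed
# on a stack; an operator pops its operands and pushes the result.
def eval_postfix(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_postfix("4 5 +".split()))  # prints 9
```

The same push-then-operate pattern extends to function calls: arguments go on the stack, and the function name acts as the operator.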
&lt;br /&gt;
In the 80s, there were many competing technologies for doing graphics&lt;br /&gt;
in the Unix world.  X won.  But Display PostScript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display PostScript. Because PostScript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the first to use Display PostScript... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: the&lt;br /&gt;
server is what we would mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is and controls your display,&lt;br /&gt;
mouse, and keyboard.  To display a window, remotely or locally, you run a&lt;br /&gt;
program known (in X-Windows) as a client, which connects over the network to&lt;br /&gt;
display a window on your X server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is that it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art; the&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath, it&#039;s very nice.  But they knew what they didn&#039;t know, so they made it&lt;br /&gt;
possible for users to decide what it should look like themselves: you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you do things like moving your mouse to a window -- what&lt;br /&gt;
happens? Does the window take focus or not? One policy is known as click-to-focus.&lt;br /&gt;
In older X systems, you could just point your mouse there, and focus followed&lt;br /&gt;
(&amp;quot;focus follows mouse&amp;quot;). This is potentially very efficient, but also very&lt;br /&gt;
confusing if you&#039;re not used to it... Or how do you handle key sequences, or&lt;br /&gt;
minimizing? Who decides how to do all this? They had the idea of something called&lt;br /&gt;
a Window Manager. This goes back to X servers providing the technical minimum&lt;br /&gt;
so that you&#039;re not limited to one behaviour. The Window Manager is just another&lt;br /&gt;
X client, with some special privileges, so it can run anywhere -- it could run&lt;br /&gt;
1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc., plus other graphical toolkits,&lt;br /&gt;
all abstracted away. These choices are all available because the X-Windows&lt;br /&gt;
people left things very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application can&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1454</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1454"/>
		<updated>2007-09-18T01:21:33Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portable&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency. &lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, Mac OS X, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the &lt;br /&gt;
different versions of Windows: most Linuxes share some components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run.  Jobs were queued up and run&lt;br /&gt;
one at a time in sequence.  These were typically things such as payroll and accounts receivable.&lt;br /&gt;
Usually they would be run overnight, and the output, either magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
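The difference can be sketched with a toy cooperative scheduler. This is an illustration in Python, not how a real OS is written: each task runs until it voluntarily yields, so a task that never yields starves everyone else, which is exactly the weakness preemption fixes.

```python
# Toy cooperative multitasking: each task is a generator that runs
# until it voluntarily yields control back to the scheduler.
def task(name, steps):
    for i in range(steps):
        print(name, "step", i)
        yield  # voluntarily give up the CPU

def run_cooperative(tasks):
    ready = list(tasks)
    while ready:
        t = ready.pop(0)
        try:
            next(t)          # let the task run until its next yield
            ready.append(t)  # reschedule it at the back of the queue
        except StopIteration:
            pass             # task finished; drop it

run_cooperative([task("A", 2), task("B", 2)])
```

With preemption, the scheduler would not have to wait for the `yield`: a timer interrupt would force the switch.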
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
With the big iron in the old days they had big computers that would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture with today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped cause the idea &amp;quot;what if we had multiple users and let them share the CPU&amp;quot; to come&lt;br /&gt;
about. This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space, and the OS and its&lt;br /&gt;
applications all shared the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place, and more importantly, something&lt;br /&gt;
in charge governing the processes. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel, such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes; but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code is kept out of the kernel and put into processes,&lt;br /&gt;
since processes can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In Unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run from a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can setup the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  It&#039;s an alert to say something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU runs programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  The I/O device flags an interrupt, and the CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware, and associated with each interrupt there&#039;s a bit of code:&lt;br /&gt;
when one interrupt happens, its code runs; when another happens, other code runs.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine that reads a key from the keyboard, stores it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, then returns.&lt;br /&gt;
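The dispatch idea can be sketched as a table mapping interrupt numbers to handler routines. The interrupt numbers and handlers here are made up for illustration; real hardware does this lookup itself:

```python
# Toy interrupt table: each interrupt number maps to a handler routine.
KEYBOARD_IRQ = 1   # hypothetical interrupt numbers
TIMER_IRQ = 0

key_buffer = []

def keyboard_handler(data):
    key_buffer.append(data)  # save the key before the next one overwrites it

def timer_handler(data):
    print("timer tick: kernel regains control")

interrupt_table = {KEYBOARD_IRQ: keyboard_handler,
                   TIMER_IRQ: timer_handler}

def raise_interrupt(number, data=None):
    interrupt_table[number](data)  # stop and run the associated code

raise_interrupt(KEYBOARD_IRQ, "x")
raise_interrupt(TIMER_IRQ)
```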
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
|}&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel is only invoked by interrupts, how can it keep&lt;br /&gt;
general control when no interrupts happen?  The clock I/O device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and&lt;br /&gt;
another program wants to do the same thing: are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
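A toy model of that translation: each process gets its own page table, so the same virtual address (2000, as in the example above) maps to different physical addresses. The page size and frame numbers are invented for the example; a real MMU does this in hardware with bit operations:

```python
# Toy MMU: a per-process page table maps virtual pages to physical frames.
PAGE_SIZE = 4096

def translate(page_table, vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)  # split address into page + offset
    frame = page_table[page]                 # the MMU looks up the frame
    return frame * PAGE_SIZE + offset

page_table_a = {0: 7}  # process A: virtual page 0 lives in physical frame 7
page_table_b = {0: 3}  # process B: virtual page 0 lives in physical frame 3

print(translate(page_table_a, 2000))  # 7*4096 + 2000 = 30672
print(translate(page_table_b, 2000))  # 3*4096 + 2000 = 14288
```

Both processes use address 2000, but they touch completely different physical memory.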
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU the kernel thinks it controls are actually virtual&lt;br /&gt;
ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run Windows inside a window on Linux, OS X, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around them easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Processes_and_kernel.png]]&lt;br /&gt;
|}&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users to allow them to&lt;br /&gt;
upgrade.  In Windows, you tend to run as an administrator user.  This means you&#039;re&lt;br /&gt;
running as the equivalent of the Unix root user.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs. &lt;br /&gt;
&lt;br /&gt;
Say you run that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In Unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including loading new kernel code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. As&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
in the design of Windows (it&#039;s a little broken and overly complex in some ways);&lt;br /&gt;
it&#039;s the result of certain design choices made along the way in the name of usability,&lt;br /&gt;
such as running users as administrators so that they don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
you with regular-user privileges but asks you whenever privileged&lt;br /&gt;
operations need doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes. &lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and may be wrapped in one... but in implementation, they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: user mode or supervisor mode. User mode&lt;br /&gt;
means that, running as a regular program, you can&#039;t touch the I/O/interrupt&lt;br /&gt;
vectors or talk to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts.. So as a user program, to invoke the kernel,&lt;br /&gt;
you call an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like a hardware interrupt, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So the kernel can then look at the memory of the invoking user program when&lt;br /&gt;
that program makes a system call. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
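That trap mechanism can be sketched as follows. The system-call number and handler are invented for illustration; a real kernel does this with hardware-assisted mode switching, not Python:

```python
# Toy system-call trap: user code never calls kernel functions directly;
# it raises a software interrupt with a system-call number, and the
# kernel dispatches from its own table.
SYS_WRITE = 4  # hypothetical system-call number

def sys_write(args):
    text = args[0]
    print(text)
    return len(text)

syscall_table = {SYS_WRITE: sys_write}

def trap(syscall_number, args):
    # the switch into supervisor mode happens here (the expensive part)
    handler = syscall_table[syscall_number]
    result = handler(args)
    # ...and the switch back to user mode happens here
    return result

trap(SYS_WRITE, ["hello"])  # prints hello
```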
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all partial work done in parallel by modern CPUs to be&lt;br /&gt;
thrown out, such as 10-20 or more instructions. The CPU has to fill the&lt;br /&gt;
pipelines and resume. This stuff happens at a level below that of what the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1453</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1453"/>
		<updated>2007-09-18T01:02:48Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Lab 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers,&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers; it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use the name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have gotten a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
its users to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is wrong now, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer: it will need more research and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about the operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
- Processes, and how Unix deals with them.&lt;br /&gt;
- How are the parts of the system divided up.&lt;br /&gt;
- Dynamic libraries, where they are, where they fit in memory.&lt;br /&gt;
- What are the dependencies.&lt;br /&gt;
- How does the graphical subsystem fit in there.  X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
- Practice on the command line. (If you&#039;re wondering where these fit in&lt;br /&gt;
with modern graphical environments, read Neal Stephenson&#039;s essay.)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the term paper, you don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat:  you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of: Say you wanted to implement a new&lt;br /&gt;
file-system? This is inherently more work, because you still have to give a&lt;br /&gt;
nice write-up. The report should still cite other work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes. Just google and see what results come up. If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* USENIX association has a number of systems oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila-&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a physical file is used&lt;br /&gt;
to hold many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document. &lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read and write from.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there are potentially more than &lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was OS 9.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?  &lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the files.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file, &lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the &lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a specific number of files open at any given time.&lt;br /&gt;
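Python&#039;s os module exposes this stateful interface almost directly, so it makes a convenient demonstration: open returns a small integer handle (a file descriptor), and read, write, and seek all take that handle rather than a filename. The file path here is a throwaway temporary file:

```python
# The stateful file interface: open a file once, then do all work
# through the returned handle, and close it to free the resources.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_RDWR | os.O_CREAT)  # open: filename to handle
os.write(fd, b"hello, file")                # write through the handle
os.lseek(fd, 0, os.SEEK_SET)                # seek back to the start
data = os.read(fd, 5)                       # read 5 bytes
os.close(fd)                                # close: free the handle

print(data)  # b'hello'
```

Note that only `open` pays the cost of walking the filesystem; every later call uses the handle directly.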
&lt;br /&gt;
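As a concrete sketch of the stateful interface, here are the calls above via&lt;br /&gt;
Python&#039;s thin os wrappers (a minimal illustration; the temporary file is just&lt;br /&gt;
scaffolding for the example):&lt;br /&gt;
&lt;br /&gt;
```python
import os
import tempfile

# Sketch of the stateful file API: open yields a handle (a file
# descriptor), read/write/seek all operate on that handle, and
# close frees the resources behind it.
fd, path = tempfile.mkstemp()    # creates and opens a scratch file
os.write(fd, b"hello")           # write through the handle
os.lseek(fd, 0, os.SEEK_SET)     # seek back to the start
data = os.read(fd, 5)            # read the same bytes back
os.close(fd)                     # close: free the kernel resources
os.unlink(path)                  # remove the scratch file
assert data == b"hello"
```
&lt;br /&gt;
Every call after open takes the handle, not the filename -- that&#039;s the&lt;br /&gt;
stateful interface at work.&lt;br /&gt;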
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks, and they&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks like we access RAM?   &lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason that we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal-functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.  &lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean this is the absolute minimum abstraction.  &lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed size blocks.  However, that&#039;s getting too low level,&lt;br /&gt;
because now all programs have to worry about where they put files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy abstraction.  Some filesystem models try to get&lt;br /&gt;
away from it.  Look at PalmOS: it resisted having files for a long time, eventually&lt;br /&gt;
giving in to support removable media, but the primary OS and API still don&#039;t use files.  &lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy files abstraction too, but &lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
==== Processes and Threads ====&lt;br /&gt;
&lt;br /&gt;
There are lots of other devices, but at the OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, or it may not be. It&#039;s a set of addresses you&lt;br /&gt;
can call your own. &lt;br /&gt;
* Execution context is CPU state (registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved and then restored to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
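A small sketch of what sharing an address space means, using Python&#039;s&lt;br /&gt;
threading module (a minimal illustration, not how you&#039;d structure real&lt;br /&gt;
threaded code):&lt;br /&gt;
&lt;br /&gt;
```python
import threading

# Two execution contexts, one address space: the write to x in the
# worker thread is visible to the main thread after the join.
x = 5

def worker():
    global x
    x = 7              # same address space, so this is the shared x

t = threading.Thread(target=worker)
t.start()
t.join()               # wait for the other execution context to end
assert x == 7          # the main thread sees the change
```
&lt;br /&gt;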
The concept of multiple address spaces is somewhat new in modern computing.&lt;br /&gt;
However, if you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640 KB limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  For programs to run at the same&lt;br /&gt;
time, they had to co-exist in the one physical address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with three&lt;br /&gt;
operations, capitalized here to differentiate them from the newer terms: FORK, JOIN, QUIT.&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
By FORKing, the main thing to note is that you&#039;re creating two execution&lt;br /&gt;
contexts, that may be sharing memory. The execution may start at the same&lt;br /&gt;
place, but may be diverging. How do you stop creating more and more and more&lt;br /&gt;
of these, to bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running; if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you need to&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state. The&lt;br /&gt;
difference here is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, what is the value of &#039;&#039;x&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set it before forking, or share it through other means.&lt;br /&gt;
&lt;br /&gt;
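A sketch of that isolation using os.fork from Python (Unix only; this mirrors&lt;br /&gt;
the &#039;&#039;x&#039;&#039; example from the process tree above):&lt;br /&gt;
&lt;br /&gt;
```python
import os

# After fork there are two processes with separate address spaces.
# The child sets x to 7, but the parent still sees 5.
x = 5
pid = os.fork()
if pid == 0:          # fork returned 0: this is the child
    x = 7             # changes only the copy in the child
    os._exit(0)       # child exits without touching the parent
os.waitpid(pid, 0)    # parent waits for the child to finish
assert x == 5         # the parent copy of x is unchanged
```
&lt;br /&gt;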
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: That&lt;br /&gt;
they have exactly the same state when you fork. If this was true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
It turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes. How do we do something different? In&lt;br /&gt;
principle we don&#039;t need anything else. We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec(). &lt;br /&gt;
Exec  replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
&lt;br /&gt;
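The fork-then-exec idiom looks roughly like this in Python (a sketch;&lt;br /&gt;
/bin/echo is just an assumed stand-in for the program you want to run):&lt;br /&gt;
&lt;br /&gt;
```python
import os

# The child replaces itself with another program but keeps its pid;
# the parent waits on that pid.
pid = os.fork()
if pid == 0:
    os.execv("/bin/echo", ["/bin/echo", "hello from the child"])
    os._exit(127)                  # reached only if exec failed
_, status = os.waitpid(pid, 0)     # wait for the child to finish
assert os.WEXITSTATUS(status) == 0
```
&lt;br /&gt;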
The Windows equivalent is CreateProcess().&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
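For contrast, Python&#039;s subprocess.run is closer in spirit to the all-in-one&lt;br /&gt;
CreateProcess style: one call that creates the process, runs the program, and&lt;br /&gt;
collects the result (on Unix it still does fork and exec underneath):&lt;br /&gt;
&lt;br /&gt;
```python
import subprocess

# One call instead of separate building blocks: create the process,
# run /bin/echo, capture its output, and wait for it to exit.
result = subprocess.run(["/bin/echo", "hello"], capture_output=True)
assert result.returncode == 0
assert result.stdout == b"hello\n"   # echo appends a newline
```
&lt;br /&gt;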
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic example is a&lt;br /&gt;
daemon listening on the network for an incoming connection. When that&lt;br /&gt;
incoming request comes in, the main program can deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
in Unix the typical idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
&lt;br /&gt;
You can have shared memory, but the default model for processes is&lt;br /&gt;
that nothing is shared, while the default model for threads is that everything is shared.&lt;br /&gt;
With threads you have to implement protections; with processes, you have to&lt;br /&gt;
opt in to sharing.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how this plays out on multiple cores.  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter 2 is about the model presented to the programmer: an API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency, which most applications eventually have to deal with. You&#039;ll&lt;br /&gt;
learn this here because the OS people did it first.&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display Postscript.  Postscript is a&lt;br /&gt;
fully fledged programming language, originally used for printers.&lt;br /&gt;
It was developed for laser printers by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough...  So postscript programs were sent to the printer.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  Postscript is a funny little language.&lt;br /&gt;
It&#039;s a postfix operator language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto the stack, then run an operator on them.&lt;br /&gt;
The same with function calls.&lt;br /&gt;
&lt;br /&gt;
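The stack discipline can be sketched with a toy evaluator (Python here,&lt;br /&gt;
handling just numbers and +; real Postscript spells the operator &#039;&#039;add&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy postfix evaluation: operands are pushed on a stack, operators
# pop their inputs and push the result.
def postfix(tokens):
    stack = []
    for t in tokens:
        if t == "+":
            b = stack.pop()       # operator consumes two operands
            a = stack.pop()
            stack.append(a + b)   # and pushes one result
        else:
            stack.append(int(t))  # operands go straight on the stack
    return stack.pop()

assert postfix(["4", "5", "+"]) == 9
```
&lt;br /&gt;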
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display Postscript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display Postscript. Because Postscript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the one that used Display Postscript first... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimizations like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks who wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: The&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is to control your display,&lt;br /&gt;
mouse, keyboard... And to display a window, remotely or locally, you run a&lt;br /&gt;
program known as a client in X-Windows which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is that it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art. The&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath, it&#039;s very nice. But they knew what they didn&#039;t know, so they let&lt;br /&gt;
users decide what it should look like themselves, so that you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you do things like moving your mouse to a window -- what&lt;br /&gt;
happens? Does the window take focus or not? Requiring a click is known as click to focus.&lt;br /&gt;
In older X systems, you could just point your mouse there, and focus followed.&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it... Or how do you handle key sequences, or minimizing? Who decides how to&lt;br /&gt;
do this all? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X Servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client with some&lt;br /&gt;
special privileges, and it can run anywhere -- even thousands of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc. Other graphical toolkits too,&lt;br /&gt;
abstracted away. These choices are all available there because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1452</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1452"/>
		<updated>2007-09-18T00:59:45Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Processes and Threads */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers,&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers, it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is wrong now, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer. It will need more research and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are, and where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in there: X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit in&lt;br /&gt;
modern graphical environments, read Neal Stephenson&#039;s essay.)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the term paper, you don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat:  you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of? Say you wanted to implement a new&lt;br /&gt;
filesystem: this is inherently more work, because you still have to give a&lt;br /&gt;
nice write-up. The report should still cite other work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes. Just google and see what results come up. If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* USENIX association has a number of systems oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a physical&lt;br /&gt;
file holds many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document. &lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read from and write to.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there are potentially more than &lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was OS 9.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
There are also other operations one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?  &lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the files.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file, &lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the &lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a specific number of files open at any given time.&lt;br /&gt;
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks, and they&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks like we access RAM?   &lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason that we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal-functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.  &lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean this is the absolute minimum abstraction.  &lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed size blocks.  However, that&#039;s getting too low level,&lt;br /&gt;
because now all programs have to worry about where they put files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy abstraction.  Some filesystem models try to get&lt;br /&gt;
away from it.  Look at PalmOS: it resisted having files for a long time, eventually&lt;br /&gt;
giving in to support removable media, but the primary OS and API still don&#039;t use files.  &lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy files abstraction too, but &lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
==== Processes and Threads ====&lt;br /&gt;
&lt;br /&gt;
There are lots of other devices, but at the OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, or it may not be. It&#039;s a set of addresses you&lt;br /&gt;
can call your own. &lt;br /&gt;
* Execution context is CPU state (registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved and then restored to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is relatively new in mainstream computing.&lt;br /&gt;
If you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640KB limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  In order to run at the same&lt;br /&gt;
time, programs had to co-exist in the physical address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with three&lt;br /&gt;
operations, capitalized here to differentiate them from the newer terms: FORK, JOIN, QUIT.&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
The main thing to note about FORKing is that you&#039;re creating two execution&lt;br /&gt;
contexts that may share memory. The execution may start at the same&lt;br /&gt;
place, but it may diverge. How do you stop creating more and more&lt;br /&gt;
of these, and bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running: if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state. One&lt;br /&gt;
difference is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, what is the value of &#039;&#039;x&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
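&lt;br /&gt;
As a sketch of the race (Python&#039;s threading module is used here purely for illustration;&lt;br /&gt;
it isn&#039;t from the lecture), the value a thread reads depends on whether the setter&lt;br /&gt;
thread has run yet:&lt;br /&gt;

```python
import threading

x = 5

def set_x():
    global x
    x = 7          # the setter thread changes the shared variable

t = threading.Thread(target=set_x)
t.start()
seen = x           # may be 5 or 7, depending on scheduling: a race
t.join()           # after joining, the write has definitely happened
print("seen:", seen, "final:", x)
```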
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking.  (Or through other means)&lt;br /&gt;
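&lt;br /&gt;
To see that a forked child can&#039;t change its parent&#039;s state, here&#039;s a minimal&lt;br /&gt;
sketch (Python&#039;s os.fork wrapper is shown for illustration; this is Unix-only):&lt;br /&gt;

```python
import os

x = 5
pid = os.fork()
if pid == 0:
    x = 7              # the child changes only its own copy
    os._exit(0)        # child exits without touching the parent
os.waitpid(pid, 0)     # parent waits for the child to finish
print("parent still sees x =", x)
```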
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: that&lt;br /&gt;
they have exactly the same state when you fork. If this were true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
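&lt;br /&gt;
The classic disposable-child idiom can be sketched like this (again using Python&#039;s&lt;br /&gt;
os wrappers for illustration; Unix-only):&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # child: do a short task, then go away
    print("child", os.getpid(), "doing its task", flush=True)
    os._exit(0)
# parent: pid is nonzero here, and is used to track the child
reaped, status = os.waitpid(pid, 0)
print("parent reaped child", reaped)
```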
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes. How do we do something different? In&lt;br /&gt;
principle we don&#039;t need anything else. We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec(). &lt;br /&gt;
Exec replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
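&lt;br /&gt;
The fork-then-exec pattern can be sketched like this (Python&#039;s os wrappers shown&lt;br /&gt;
for illustration; echo is assumed to be on the PATH):&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # child: exec throws away this address space and loads echo,
    # but the pid stays the same
    os.execvp("echo", ["echo", "hello from the replaced child"])
    os._exit(127)      # only reached if exec itself fails
child, status = os.waitpid(pid, 0)
print("child", child, "exited with status", os.WEXITSTATUS(status))
```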
&lt;br /&gt;
The Windows equivalent is CreateProcess()&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic example is a&lt;br /&gt;
daemon listening on the network for incoming connections. When an&lt;br /&gt;
incoming request comes in, the main program could deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
the typical idiom in Unix is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
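&lt;br /&gt;
As a sketch of that idiom (Python sockets on localhost, with a forked process&lt;br /&gt;
standing in for a remote client; a real daemon would loop forever instead of&lt;br /&gt;
handling one connection), the parent accepts, forks a handler, and goes back to accepting:&lt;br /&gt;

```python
import os
import socket

# parent: a tiny "daemon" with a listening socket on an ephemeral port
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

client_pid = os.fork()
if client_pid == 0:
    # this child pretends to be the remote client
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"hi")
    ok = c.recv(1024) == b"echo:hi"
    os._exit(0 if ok else 1)

conn, _ = listener.accept()
handler_pid = os.fork()
if handler_pid == 0:
    # handler child: process this one connection, then go away
    conn.sendall(b"echo:" + conn.recv(1024))
    os._exit(0)
conn.close()                 # parent drops its copy and could accept again
os.waitpid(handler_pid, 0)
_, client_status = os.waitpid(client_pid, 0)
print("client ok:", os.WEXITSTATUS(client_status) == 0)
```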
&lt;br /&gt;
You can have shared memory.  But the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
With threads you have to implement protections, but with processes, you have to&lt;br /&gt;
opt in to share.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how this plays out on multiple cores.  This depends on the implementation,&lt;br /&gt;
and is sometimes a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two is about the model presented to the programmer: an API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks, so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency - something most applications have to deal with. You&#039;ll learn this because&lt;br /&gt;
the OS guys did this first.&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display Postscript.  Postscript is a&lt;br /&gt;
fully fledged programming language, originally used for printers.&lt;br /&gt;
It was developed for laser printers, by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough...  So Postscript programs were sent to the printer.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  Postscript is a funny little language:&lt;br /&gt;
it&#039;s a postfix operator language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto the stack, then run an operator on them.&lt;br /&gt;
The same goes for function calls.&lt;br /&gt;
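&lt;br /&gt;
To illustrate the postfix idea (a tiny evaluator sketched in Python, not actual&lt;br /&gt;
Postscript), operands are pushed on a stack and operators pop them:&lt;br /&gt;

```python
def eval_postfix(program):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in program.split():
        if tok in ops:
            b = stack.pop()                # operators pop their operands...
            a = stack.pop()
            stack.append(ops[tok](a, b))   # ...and push the result
        else:
            stack.append(int(tok))         # operands are pushed
    return stack.pop()

print(eval_postfix("4 5 +"))   # the Postscript-style spelling of 4+5, i.e. 9
```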
&lt;br /&gt;
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display Postscript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display Postscript. Because Postscript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the one that used Display Postscript first... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: The&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is to control your display,&lt;br /&gt;
mouse, keyboard... And to display a window, remotely or locally, you run a&lt;br /&gt;
program known as a client in X-Windows which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is that it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art. The&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath it&#039;s very nice. But they knew what they didn&#039;t know, so they made it so&lt;br /&gt;
users could decide what it should look like themselves, so that you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you do things like moving your mouse to a window -- what&lt;br /&gt;
happens? Do you take focus or not? Taking focus on a click is known as click-to-focus.&lt;br /&gt;
In older X systems, you could just point your mouse there, and focus followed.&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it... Or how do you handle key sequences, or minimize? Who decides how to&lt;br /&gt;
do this all? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X Servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client, with some&lt;br /&gt;
special privileges so it can run anywhere. It could run 1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc. Other graphical toolkits too,&lt;br /&gt;
abstracted away. These choices are all available there because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1451</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1451"/>
		<updated>2007-09-18T00:59:18Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: /* Files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers,&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers, it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is wrong now, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer. It will need more research, and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers, but learning about operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are, where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in there: X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit in&lt;br /&gt;
modern graphical environments, read Neal Stephenson&#039;s essay.)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the term paper, you don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat: you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of? Say you wanted to implement a new&lt;br /&gt;
filesystem. This is inherently more work, because you still have to give a&lt;br /&gt;
nice write-up. The report should still cite other work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes. Just google and see what results come up. If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like,&lt;br /&gt;
instead of being stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* USENIX association has a number of systems oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that such a file is used&lt;br /&gt;
to hold many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document. &lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read and write from.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there are potentially more than &lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was OS 9.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?  &lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the files.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file, &lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the &lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a limited number of files open at any given time.&lt;br /&gt;
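&lt;br /&gt;
The stateful open/read/write/seek/close pattern can be sketched with Python&#039;s&lt;br /&gt;
low-level os wrappers (the temporary file here is purely for illustration):&lt;br /&gt;

```python
import os
import tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

fd = os.open(path, os.O_RDWR)      # open: filesystem lookup, returns a handle
os.write(fd, b"hello, file")       # read/write take the handle, not the name
os.lseek(fd, 0, os.SEEK_SET)       # seek back to the start
data = os.read(fd, 1024)
os.close(fd)                       # close: frees the per-process resources
os.remove(path)
print(data)
```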
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks... They&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks like we access RAM?&lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is they&#039;re a minimal-functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.  &lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t mean it is the absolutely minimal abstraction.&lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed-size blocks.  However, that&#039;s getting too low-level,&lt;br /&gt;
because then all programs would have to worry about where they put files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy.  Some models of filesystems try to get&lt;br /&gt;
away from it.  Look at PalmOS - it resisted having files for a long time.  It eventually&lt;br /&gt;
gave in to support removable media, but the primary OS and API still don&#039;t support files.  &lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy files abstraction too, but &lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
===== Processes and Threads =====&lt;br /&gt;
&lt;br /&gt;
There are lots of other devices, but from an OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, or it may not be. It&#039;s a set of addresses you&lt;br /&gt;
can call your own. &lt;br /&gt;
* Execution context is CPU state (registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved, then restored later to resume execution.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is relatively new in mainstream computing.&lt;br /&gt;
If you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640KB limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  In order to run at the same&lt;br /&gt;
time, programs had to co-exist in the physical address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with three&lt;br /&gt;
operations, capitalized here to differentiate them from the newer terms: FORK, JOIN, QUIT.&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
The main thing to note about FORKing is that you&#039;re creating two execution&lt;br /&gt;
contexts that may share memory. The execution may start at the same&lt;br /&gt;
place, but it may diverge. How do you stop creating more and more&lt;br /&gt;
of these, and bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running: if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state. One&lt;br /&gt;
difference is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, what is the value of &#039;&#039;x&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking.  (Or through other means)&lt;br /&gt;
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: that&lt;br /&gt;
they have exactly the same state when you fork. If this were true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes. How do we do something different? In&lt;br /&gt;
principle we don&#039;t need anything else. We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec().&lt;br /&gt;
Exec replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
&lt;br /&gt;
The Windows equivalent is CreateProcess().&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic one is a&lt;br /&gt;
daemon. One listening on the network for an incoming connection. When that&lt;br /&gt;
incoming request comes in, the main program can deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
in Unix the typical idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
&lt;br /&gt;
You can have shared memory.  But the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
For threads you have to implement protections, but for processes, you have to&lt;br /&gt;
opt in to share.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how this plays out on multiple cores.  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two is talking about the model presented to the programmer.  An API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks, so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency -- which most applications have to deal with. You&#039;ll learn this&lt;br /&gt;
because the OS guys did this first.&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display Postscript.  Postscript is a&lt;br /&gt;
fully fledged programming language, originally developed for laser printers&lt;br /&gt;
by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough...  So postscript programs were sent to the printer.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  Postscript is a funny little language.&lt;br /&gt;
It&#039;s a postfix operator language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push them onto the stack, then run an operator on them.&lt;br /&gt;
The same with function calls.&lt;br /&gt;
&lt;br /&gt;
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display Postscript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display Postscript. Because Postscript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the one that used Display Postscript first... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: The&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is to control your display,&lt;br /&gt;
mouse, keyboard... And to display a window, remotely or locally, you run a&lt;br /&gt;
program known as a client in X-Windows which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art. The&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath it&#039;s very nice. But they knew they didn&#039;t know, so they made it so&lt;br /&gt;
the users could decide what it should look like themselves, so that you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you move your mouse to a window -- what&lt;br /&gt;
happens? Does it take focus or not? Requiring a click is known as click-to-focus.&lt;br /&gt;
In older X systems, you could just point your mouse there, and focus followed.&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it... Or how do you handle key sequences, or minimize? Who decides how to&lt;br /&gt;
do this all? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X Servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client, with some&lt;br /&gt;
special privileges so it can run anywhere. It could run 1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc. Other graphical toolkits too,&lt;br /&gt;
abstracted away. These choices are all available there because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1449</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1449"/>
		<updated>2007-09-18T00:45:57Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portable&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency.&lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the text book, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, MacOSX, VXWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the&lt;br /&gt;
different versions of Windows, as most Linux distributions share some components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. These were queued up, and run&lt;br /&gt;
one at a time in sequence.  These were typically such things as payroll and accounts receivable.&lt;br /&gt;
Usually these would be run overnight, and the output, either on magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
With the big iron in the old days they had big computers that would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture with today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped cause the idea &amp;quot;what if we had multiple users and let them share the CPU&amp;quot; to come&lt;br /&gt;
about. This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space and the OS and its&lt;br /&gt;
applications were all sharing the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place, and more importantly, something/someone&lt;br /&gt;
in charge governing the process. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes, but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code goes into processes, as they can be restarted, and is kept&lt;br /&gt;
out of the kernel.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In Unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can setup the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  It&#039;s an alert to say something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU is running the programs, until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  So an I/O device flags an interrupt.  The CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, run its code; when another happens, run another.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine to read a key from the keyboard, store it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, then return.&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
|}&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second!  If the kernel can only control interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock IO device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing -- are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU that the kernel thinks it controls are actually virtual&lt;br /&gt;
ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run Windows inside a window on Linux, OSX, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and Windows 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around them easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Processes_and_kernel.png]]&lt;br /&gt;
|}&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users to allow them to&lt;br /&gt;
upgrade.  In Windows, you tend to run as an admin user.  This means you&#039;re&lt;br /&gt;
running as the equivalent of the Unix root user.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs.&lt;br /&gt;
&lt;br /&gt;
Consider that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
to send all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In Unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. As&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
with the design of Windows -- it&#039;s a little broken, and overly complex in some ways --&lt;br /&gt;
but rather certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running as admin users so that users don&#039;t need to be asked to do something&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
you as a regular user with full privileges available, but asks you whenever privileged&lt;br /&gt;
operations need doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes.&lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and may be wrapped in one... but in implementation, they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: User mode, or supervisor mode. User mode&lt;br /&gt;
means that running as a regular program, you can&#039;t talk to the I/O/interrupt&lt;br /&gt;
vectors or to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, then everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts... So as a user program, to invoke the kernel,&lt;br /&gt;
you call an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like a hardware interrupt, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So the kernel can then look at the memory of the invoking user program when a&lt;br /&gt;
user program calls the system call. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all partial work done in parallel by modern CPUs to be&lt;br /&gt;
thrown out, such as 10-20 or more instructions. The CPU has to fill the&lt;br /&gt;
pipelines and resume. This stuff happens at a level below that of what the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1448</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1448"/>
		<updated>2007-09-18T00:45:39Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes have not yet been reviewed for correctness.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers; it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is now wrong, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer. It will need more research, and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are, where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in.  X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit in&lt;br /&gt;
modern graphical environments, read Neal Stephenson&#039;s essay)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the Term Paper... You don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat: you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of?  Say you wanted to implement a new&lt;br /&gt;
filesystem.  This is inherently more work, because you still have to give a&lt;br /&gt;
nice write-up. The report should still cite other work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes. Just google and see what results come up. If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* The USENIX association has a number of systems-oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a physical file is used&lt;br /&gt;
to hold many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document. &lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read from and write to.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there are potentially more than &lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was OS 9.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?  &lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the files.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file, &lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the &lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a specific number of files open at any given time.&lt;br /&gt;
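A minimal C sketch (not from the lecture) of that stateful interface, using the POSIX calls; the path and message here are just illustrations:&lt;br /&gt;

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a message to a file, then read it back using the
   stateful open/read/write/close/lseek interface.
   Returns 0 on success, -1 on any failure. */
int file_roundtrip(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    const char msg[] = "hello";
    if (write(fd, msg, sizeof msg) != (ssize_t)sizeof msg) {
        close(fd);
        return -1;
    }

    /* "append" is just: seek to the end, then write. */
    lseek(fd, 0, SEEK_END);

    /* Rewind and read the bytes back through the same handle. */
    lseek(fd, 0, SEEK_SET);
    char buf[sizeof msg];
    if (read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
        close(fd);
        return -1;
    }

    close(fd);    /* frees the resources allocated by open */
    unlink(path); /* remove the temporary file */
    return strcmp(buf, msg) == 0 ? 0 : -1;
}
```

Note how read and write never mention the filename: all the state lives behind the &amp;quot;handle&amp;quot; (the file descriptor) returned by open.&lt;br /&gt;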
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage, on disks... They&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks like we access RAM?   &lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason that we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal-functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.  &lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean this is the most absolutely minimal abstraction.  &lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed size blocks.  However, that&#039;s getting too low level,&lt;br /&gt;
because now all programs have to worry about where they put files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy.  Some models of filesystems try to get&lt;br /&gt;
away from it.  Look at PalmOS - it resisted having files for a long time, and eventually&lt;br /&gt;
gave in to support removable media, but the primary OS and API still don&#039;t support files.  &lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy files abstraction too, but &lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
===== Processes and Threads =====&lt;br /&gt;
&lt;br /&gt;
There&#039;s lots of other devices, but from an OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, it may not be. It&#039;s a set of addresses you&lt;br /&gt;
can call your own. &lt;br /&gt;
* Execution context is CPU state (Registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved and then restored to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is somewhat new in modern computing.&lt;br /&gt;
However, if you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640kb limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  In order to run at the same&lt;br /&gt;
time, programs had to co-exist in the physical memory address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, using FORK, JOIN,&lt;br /&gt;
and QUIT (capitalized here to differentiate them from the newer terms).&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
By FORKing, the main thing to note is that you&#039;re creating two execution&lt;br /&gt;
contexts, that may be sharing memory. The execution may start at the same&lt;br /&gt;
place, but may be diverging. How do you stop creating more and more and more&lt;br /&gt;
of these, to bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running: if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state. The&lt;br /&gt;
difference here is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, what is the value of &#039;&#039;x&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking.  (Or through other means)&lt;br /&gt;
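A small C sketch (not from the lecture) of that isolation, using the 5-and-7 values from the diagram: the child changes its own copy of &#039;&#039;x&#039;&#039;, and the parent never sees it:&lt;br /&gt;

```c
#include <sys/wait.h>
#include <unistd.h>

/* The child sets x to 7, but its address space is a copy:
   the parent's x stays 5. Returns the parent's view of x. */
int parent_view_of_x(void)
{
    int x = 5;
    pid_t pid = fork();
    if (pid == 0) {        /* child: change our copy, then go away */
        x = 7;
        _exit(0);
    }
    waitpid(pid, NULL, 0); /* parent: wait for the child to finish */
    return x;              /* still 5 -- no shared state */
}
```

In a multithreaded version of the same code, both execution contexts would see the same &#039;&#039;x&#039;&#039;, and the result would depend on scheduling.&lt;br /&gt;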
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: that&lt;br /&gt;
they have exactly the same state when you fork. If this were true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
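That idiom looks roughly like this in C (a sketch; the exit value 42 is just an illustration):&lt;br /&gt;

```c
#include <sys/wait.h>
#include <unistd.h>

/* The classic idiom: branch on fork()'s return value.
   The child does a short task and goes away; the parent uses the
   returned pid to wait for it and collect its exit status. */
int spawn_disposable_child(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* pid == 0: we are the child -- do the task, then exit */
        _exit(42);
    }
    /* nonzero pid: we are the parent, and pid names our child */
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```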
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes. How do we do something different? In&lt;br /&gt;
principle we don&#039;t need anything else. We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec(). &lt;br /&gt;
Exec  replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
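A hedged sketch of the fork-then-exec pattern in C; the program run here (/bin/true) is just an assumed example:&lt;br /&gt;

```c
#include <sys/wait.h>
#include <unistd.h>

/* fork() first, then exec() in the child, so the original process
   survives. exec replaces the child's address space with the new
   program, but the child keeps its pid. Returns the child's exit
   status, or -1 if it did not exit normally. */
int run_program(const char *path, char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {
        execv(path, argv);
        _exit(127);        /* only reached if exec failed */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Calling execv without forking first would replace the caller itself, which is exactly the &amp;quot;kill the original process&amp;quot; behaviour described above.&lt;br /&gt;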
&lt;br /&gt;
The Windows equivalent is CreateProcess().&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic example is a&lt;br /&gt;
daemon listening on the network for incoming connections. When an&lt;br /&gt;
incoming request comes in, the main program could deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
the typical Unix idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
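A rough sketch of that idiom. A real daemon would get requests from accept() on a listening socket; here the requests are faked with a simple counter so the loop is self-contained:&lt;br /&gt;

```c
#include <sys/wait.h>
#include <unistd.h>

/* For each incoming "request", fork a disposable child to handle
   it while the parent goes back to waiting for more. Returns how
   many requests were handled successfully. */
int handle_requests(int nrequests)
{
    int handled = 0;
    for (int i = 0; i < nrequests; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: process this one "connection", then go away */
            _exit(0);
        }
        /* parent: reap the child, then loop for the next request */
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            handled++;
    }
    return handled;
}
```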
&lt;br /&gt;
You can have shared memory.  But the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
For threads you have to implement protections; for processes, you have to&lt;br /&gt;
opt in to share.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how does this play on multiple cores?  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two is talking about the model presented to the programmer.  An API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks, so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency - which most applications have to deal with. You&#039;ll learn this because&lt;br /&gt;
the OS guys did this first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display PostScript.  PostScript is a&lt;br /&gt;
fully fledged programming language, developed for laser printers&lt;br /&gt;
by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough...  So PostScript programs were sent to the printer.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  PostScript is a funny little language.&lt;br /&gt;
It&#039;s a postfix language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto the stack, then run an operator on them.&lt;br /&gt;
The same goes for function calls.&lt;br /&gt;
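To illustrate the postfix style, here&#039;s a toy evaluator in C (a sketch only: PostScript itself has far more types and operators; this handles just ints, &#039;+&#039;, and &#039;*&#039;):&lt;br /&gt;

```c
#include <ctype.h>
#include <stdlib.h>

/* Evaluate a space-separated postfix expression like "4 5 +":
   operands are pushed on a stack, and each operator pops its
   arguments and pushes the result. */
int eval_postfix(const char *expr)
{
    int stack[64];
    int top = 0;
    for (const char *p = expr; *p; p++) {
        if (isdigit((unsigned char)*p)) {
            char *end;
            stack[top++] = (int)strtol(p, &end, 10);
            p = end - 1;   /* the loop's p++ moves past the number */
        } else if (*p == '+') {
            int b = stack[--top], a = stack[--top];
            stack[top++] = a + b;
        } else if (*p == '*') {
            int b = stack[--top], a = stack[--top];
            stack[top++] = a * b;
        }                  /* anything else (spaces) is skipped */
    }
    return stack[top - 1];
}
```

So &amp;quot;4 5 +&amp;quot; pushes 4 and 5, then + pops both and pushes 9.&lt;br /&gt;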
&lt;br /&gt;
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display PostScript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display PostScript. Because PostScript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the one that used Display PostScript first... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: The&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is to control your display,&lt;br /&gt;
mouse, keyboard... And to display a window, remotely or locally, you run a&lt;br /&gt;
program known as a client in X-Windows which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art. The&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath it&#039;s very nice... But they knew they didn&#039;t know, so they made it so&lt;br /&gt;
users could decide what it should look like themselves, so that you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you do things like moving your mouse to a window -- what&lt;br /&gt;
happens? Do you take focus or not? This is something known as click to focus.&lt;br /&gt;
In older X Systems, you could just point your mouse there, and focus followed.&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it... Or how do you handle key sequences, or minimize? Who decides how to&lt;br /&gt;
do this all? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X Servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client, with some&lt;br /&gt;
special privileges so it can run anywhere. It could run 1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc. There are other graphical toolkits too,&lt;br /&gt;
abstracted away. These choices are all available there because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1447</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1447"/>
		<updated>2007-09-18T00:42:27Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes are still in the process of being posted.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers; it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out to bring with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel]. &lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is now wrong, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of the baggage in the operating systems as we progress in this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer. It will need more research, and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are, where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in.  X-Windows, the classical&lt;br /&gt;
Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit in&lt;br /&gt;
modern graphical environments, read Neal Stephenson&#039;s essay)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the Term Paper... You don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat: you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of?  Say you wanted to implement a new&lt;br /&gt;
filesystem.  This is inherently more work, because you still have to produce a&lt;br /&gt;
nice write-up.  The report should still cite related work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes.  Just google and see what results come up.  If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* The USENIX Association has a number of systems-oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a physical file&lt;br /&gt;
holds many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document.&lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read from and write to.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there is potentially more than&lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was the classic Mac OS (through Mac OS 9), with its data and resource forks.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?&lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the file.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file,&lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the&lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a limited number of files open at any given time.&lt;br /&gt;
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks... They&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks the way we access RAM?&lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason we don&#039;t generally do this.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal-functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.&lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface with the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose&lt;br /&gt;
to use an object model, we&#039;d be implying that you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean it is the most absolutely minimal abstraction.&lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed-size blocks.  However, that&#039;s getting too low level,&lt;br /&gt;
because then every program would have to worry about where it puts its files.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it&#039;s stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy.  Some systems try to get&lt;br /&gt;
away from it.  Look at PalmOS: it resisted having files for a long time, but eventually&lt;br /&gt;
gave in to support removable media; the primary OS and API still don&#039;t support files.&lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy file abstraction too, but&lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
==== Processes and Threads ====&lt;br /&gt;
&lt;br /&gt;
There&#039;s lots of other devices, but from an OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM.  It may&lt;br /&gt;
be instantiated in physical memory, or it may not be.  It&#039;s a set of addresses you&lt;br /&gt;
can call your own.&lt;br /&gt;
* Execution context is CPU state (registers, processor status&lt;br /&gt;
words, etc.).  There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved, and then restored to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is a relatively modern one.&lt;br /&gt;
If you go back to the old days of MS-DOS, there was only one address space: the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640KB limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  In order to run at the same&lt;br /&gt;
time, programs had to co-exist in the one physical address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with three&lt;br /&gt;
operations: FORK, JOIN, and QUIT (capitalized here to differentiate them from the newer terms).&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
By FORKing, the main thing to note is that you&#039;re creating two execution&lt;br /&gt;
contexts that may be sharing memory.  The execution may start at the same&lt;br /&gt;
place, but may then diverge.  How do you stop creating more and more and more&lt;br /&gt;
of these, and bring them back under control or stop them?  That&#039;s the JOIN&lt;br /&gt;
operation.  Each thread tracks how many threads are running; if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT?  QUIT stops the whole program -- all execution.  It will cut&lt;br /&gt;
off all threads, even if the calling thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the fork, you made a copy of the entire process?  There&lt;br /&gt;
are now two separate instances of the program, with the same state.  The&lt;br /&gt;
difference here is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context).  This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
[[Image:Comp3000-process-tree.png]]&lt;br /&gt;
&lt;br /&gt;
In this diagram, What is the value of &#039;&#039;x?&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking.  (Or through other means)&lt;br /&gt;
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: That&lt;br /&gt;
they have exactly the same state when you fork. If this was true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid, and if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes.  How do we run something different?  In&lt;br /&gt;
principle we don&#039;t need anything else.  We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, Unix provides exec().&lt;br /&gt;
Exec  replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
&lt;br /&gt;
The Windows equivalent is CreateProcess()&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking?  A classic case is a&lt;br /&gt;
daemon: a program listening on the network for incoming connections.  When an&lt;br /&gt;
incoming request comes in, the main program could deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time.  Instead,&lt;br /&gt;
the typical Unix idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
&lt;br /&gt;
You can have shared memory, but the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
With threads you have to implement protections; with processes, you have to&lt;br /&gt;
opt in to sharing.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how does this play on multiple cores?  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two is talking about the model presented to the programmer.  An API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented.  It&#039;s&lt;br /&gt;
useful to know about these tricks, so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency, which most applications have to deal with.&lt;br /&gt;
You&#039;ll learn this here because&lt;br /&gt;
the OS people did it first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display PostScript.  PostScript is a&lt;br /&gt;
fully-fledged programming language, originally used for printers.&lt;br /&gt;
It was developed for laser printers by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough... so PostScript programs were sent to the printer instead.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  PostScript is a funny little language:&lt;br /&gt;
it&#039;s a postfix language.  Instead of saying things like &amp;quot;4 + 5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto a stack, then run an operator on them.&lt;br /&gt;
The same goes for function calls.&lt;br /&gt;
&lt;br /&gt;
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display PostScript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display PostScript.  Because PostScript is a sequential program,&lt;br /&gt;
it is hard to parallelize; PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the first to use Display PostScript... NeXT was founded by&lt;br /&gt;
Steve Jobs.  OS X is Unix with Display PDF... and you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: the&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is, controlling your display,&lt;br /&gt;
mouse, and keyboard.  To display a window, remotely or locally, you run a&lt;br /&gt;
program known (in X-Windows) as a client, which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is that it took the abstraction to an extreme.  The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art.  The&lt;br /&gt;
original X-Windows tools were created by regular programmers.  Technically,&lt;br /&gt;
underneath, it&#039;s very nice.  But they knew what they didn&#039;t know, so they made it so&lt;br /&gt;
users could decide what it should look like themselves, so that you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you do things like moving your mouse to a window -- what&lt;br /&gt;
happens?  Does the window take focus or not?  Clicking to give a window focus is known as click-to-focus.&lt;br /&gt;
In older X systems, you could just point your mouse there, and focus followed (focus-follows-mouse).&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it...  Or how do you handle key sequences, or minimizing?  Who decides how to&lt;br /&gt;
do this all? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X Servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client, with some&lt;br /&gt;
special privileges so it can run anywhere. It could run 1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc.  There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc.  Other graphical toolkits too,&lt;br /&gt;
abstracted away. These choices are all available there because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Comp3000-process-tree.png&amp;diff=1446</id>
		<title>File:Comp3000-process-tree.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Comp3000-process-tree.png&amp;diff=1446"/>
		<updated>2007-09-18T00:40:49Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: Process Tree&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Process Tree&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1445</id>
		<title>Using the Operating System</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Using_the_Operating_System&amp;diff=1445"/>
		<updated>2007-09-18T00:16:53Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes are still in the process of being posted.&lt;br /&gt;
&lt;br /&gt;
== Lecture 3: Using the Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Course Notes ====&lt;br /&gt;
The course webpage has been changed to point to this wiki.  Notes for lectures will continue to be posted here.&lt;br /&gt;
&lt;br /&gt;
We still need volunteers to take notes and put them up.  Don&#039;t forget the up to 3% bonus for doing so.&lt;br /&gt;
&lt;br /&gt;
==== Lab 1 ====&lt;br /&gt;
&lt;br /&gt;
Lab 1 will be up soon. Starting tomorrow, there are labs. Please show up.&lt;br /&gt;
&lt;br /&gt;
If you&#039;re clever, you can probably find most of the answers online.  Avoid looking up the answers,&lt;br /&gt;
though, because you&#039;ll learn much more than just the answers if you explore&lt;br /&gt;
while using the computer.&lt;br /&gt;
&lt;br /&gt;
You&#039;ll do better on the tests if you do the labs.&lt;br /&gt;
&lt;br /&gt;
The point of this course is to build up a conceptual model of&lt;br /&gt;
how computers work.  This conceptual model is not made up of answers; it&#039;s made up&lt;br /&gt;
of connections.  You&#039;ll start to make these connections by doing the labs.&lt;br /&gt;
&lt;br /&gt;
The lab will be posted as a PDF.  You can print it out and bring it with you.&lt;br /&gt;
When you go to hand in the lab, print off your answers on a separate piece of paper.&lt;br /&gt;
Answers will be due in two weeks.&lt;br /&gt;
&lt;br /&gt;
All functioning lab machines are running Debian Linux 4.0 (etch).&lt;br /&gt;
They should be connected to the internet.  They should have a browser on them&lt;br /&gt;
called [http://en.wikipedia.org/wiki/Naming_conflict_between_Debian_and_Mozilla IceWeasel].&lt;br /&gt;
It&#039;s really Mozilla Firefox.  Because Mozilla has trademarked the name Firefox,&lt;br /&gt;
in order to use their name, you have to use exactly their binary distribution.  Debian could&lt;br /&gt;
have had a waiver, but because Debian is about freedom, Debian didn&#039;t want&lt;br /&gt;
the users of Debian to be bound by the terms of the Firefox agreement.&lt;br /&gt;
&lt;br /&gt;
One thing you&#039;ll notice while studying operating systems is that there&#039;s a lot&lt;br /&gt;
of culture.  This is because users get used to a particular way of doing things.&lt;br /&gt;
For example, lots of us are probably used to Windows and how it works.  If&lt;br /&gt;
you changed the fundamentals of how Windows worked, many of us would be unhappy.&lt;br /&gt;
&lt;br /&gt;
Some of the things we&#039;ll be studying are based on decisions made long ago, often &lt;br /&gt;
arbitrarily or for a technical reason that was true at the time.&lt;br /&gt;
Even if it was wrong then, or is now wrong, we&#039;re often stuck with it.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at some of this baggage in operating systems as we progress through this course.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the lab, we&#039;ll be given a set of questions in 2 parts.&lt;br /&gt;
&lt;br /&gt;
Part A we should be able to finish during the hour, give or take a few minutes.&lt;br /&gt;
If it takes you 3-4 hours, you&#039;re probably doing something wrong.&lt;br /&gt;
&lt;br /&gt;
Part B will take longer. It will need more research and a bit more reading. You&lt;br /&gt;
should be working together. If you have trouble finding a buddy, talk to Dr.&lt;br /&gt;
S. Talk to each other to learn. The purpose isn&#039;t just getting the right&lt;br /&gt;
answers; it&#039;s learning about operating systems.&lt;br /&gt;
&lt;br /&gt;
We&#039;re going to look at:&lt;br /&gt;
&lt;br /&gt;
* Processes, and how Unix deals with them.&lt;br /&gt;
* How the parts of the system are divided up.&lt;br /&gt;
* Dynamic libraries: where they are, and where they fit in memory.&lt;br /&gt;
* What the dependencies are.&lt;br /&gt;
* How the graphical subsystem fits in: X-Windows, the classical Unix graphical environment.&lt;br /&gt;
* Practice on the command line. (If you&#039;re wondering where these fit into modern graphical environments, read Neal Stephenson&#039;s essay &#039;&#039;In the Beginning... Was the Command Line&#039;&#039;.)&lt;br /&gt;
&lt;br /&gt;
Try to finish part A in the lab.  &lt;br /&gt;
Answers due in class 2 weeks from today.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
For the Term Paper... You don&#039;t have to do a pure literature review.  You could&lt;br /&gt;
do an original operating system extension.  There&#039;s one caveat: you&#039;ve got&lt;br /&gt;
to clear what you&#039;re going to do with the professor first.  You have to tell&lt;br /&gt;
him a week before the outline is due Oct 22nd.&lt;br /&gt;
&lt;br /&gt;
What types of things is he thinking of?  Say you wanted to implement a new&lt;br /&gt;
filesystem.  This is inherently more work, because you still have to produce a&lt;br /&gt;
nice write-up.  The report should still cite related work.&lt;br /&gt;
&lt;br /&gt;
All of us should have started the process by next week, even if it&#039;s just&lt;br /&gt;
googling for 15 minutes.  Just google and see what results come up.  If you&lt;br /&gt;
start now, you&#039;ll have time to pick a topic that you like, instead of the&lt;br /&gt;
first thing that comes along.  It&#039;s better to work on something you like&lt;br /&gt;
than to be stuck reading papers you&#039;re not interested in.&lt;br /&gt;
&lt;br /&gt;
If you want to find good OS papers:&lt;br /&gt;
* The USENIX Association has a number of systems-oriented conferences.&lt;br /&gt;
** OSDI &lt;br /&gt;
** USENIX Annual Technical Conference&lt;br /&gt;
** LISA&lt;br /&gt;
&lt;br /&gt;
=== Using the Operating System ===&lt;br /&gt;
&lt;br /&gt;
Chapter 2 looks at the programming model of an operating system. &lt;br /&gt;
The operating system provides certain abstractions to help programmers work with it.&lt;br /&gt;
&lt;br /&gt;
What are some examples of abstractions?&lt;br /&gt;
&lt;br /&gt;
==== Files ====&lt;br /&gt;
&lt;br /&gt;
A file is a metaphor.  What was the original metaphor?  The manila&lt;br /&gt;
coloured folder that we put paper in.  It&#039;s interesting to note that a physical file&lt;br /&gt;
holds many pages or documents, but a computer file is a single document.  Instead, a directory&lt;br /&gt;
holds many files, which are each generally one document.&lt;br /&gt;
The metaphor hasn&#039;t made much sense for a long time, but it is still in use.&lt;br /&gt;
&lt;br /&gt;
What is a file?  &lt;br /&gt;
&lt;br /&gt;
A file is a bytestream you can read from and write to.&lt;br /&gt;
&lt;br /&gt;
We also have an abstraction called a byte, 256 possible values, 0-255.&lt;br /&gt;
We as computer scientists think we can represent just about anything&lt;br /&gt;
with these.&lt;br /&gt;
&lt;br /&gt;
; file :  named bytestream(s). &lt;br /&gt;
&lt;br /&gt;
In modern operating systems there is potentially more than&lt;br /&gt;
one bytestream in a file.  When there is more than one bytestream, we&lt;br /&gt;
call this a forked file.&lt;br /&gt;
&lt;br /&gt;
An early operating system that used forked files was the classic Mac OS (through Mac OS 9), with its data and resource forks.&lt;br /&gt;
On a traditional system, you get a sequence of bytes&lt;br /&gt;
when you open a file.  In a forked file, when you read it, you get some data,&lt;br /&gt;
but there is also other data hanging around.  We&#039;ll talk about that later.&lt;br /&gt;
&lt;br /&gt;
The standard API calls for a file are:&lt;br /&gt;
&lt;br /&gt;
* open&lt;br /&gt;
* read&lt;br /&gt;
* write&lt;br /&gt;
* close&lt;br /&gt;
* seek&lt;br /&gt;
&lt;br /&gt;
As well as other operations that one might need to perform on files, such as:&lt;br /&gt;
* truncate&lt;br /&gt;
* append - (seek to end of file and write)&lt;br /&gt;
* execute&lt;br /&gt;
&lt;br /&gt;
Why open and close?  Why can&#039;t we just operate on a filename?&lt;br /&gt;
Because it (usually) takes a long time to go through the filesystem to find the file.&lt;br /&gt;
Open and close are optimizations -- the abstraction is a stateful interface.&lt;br /&gt;
You start by using open to obtain some sort of &amp;quot;handle&amp;quot; representing the file,&lt;br /&gt;
and pass this &amp;quot;handle&amp;quot; value to read and write.  When you&#039;re done, closing the&lt;br /&gt;
file frees the resources allocated when opening the file.  On most systems&lt;br /&gt;
you can only have a limited number of files open at any given time.&lt;br /&gt;
&lt;br /&gt;
There are some filesystems where open and close don&#039;t do much of anything,&lt;br /&gt;
such as some networked filesystems.&lt;br /&gt;
&lt;br /&gt;
Files represent storage on disks, and they&#039;re random access.  If they&#039;re random access&lt;br /&gt;
like RAM, why don&#039;t we access disks the way we access RAM?&lt;br /&gt;
Why couldn&#039;t we just allocate objects as we need them?  We could indeed do this, but&lt;br /&gt;
it turns out that there&#039;s a reason we don&#039;t generally do it.&lt;br /&gt;
&lt;br /&gt;
The file interface is a procedural interface.  &lt;br /&gt;
&lt;br /&gt;
One nice thing about files is that they&#039;re a minimal-functionality interface.  The concept of&lt;br /&gt;
minimal functionality is a recurring theme you&#039;ll find when we discuss filesystems.&lt;br /&gt;
&lt;br /&gt;
The abstraction used to interface the filesystem&lt;br /&gt;
shouldn&#039;t prohibit you from creating particular forms of applications.  If we chose &lt;br /&gt;
to use an object model, we&#039;d be implying you don&#039;t want to give arbitrary access to the&lt;br /&gt;
data on disk, as objects tend to encapsulate their data.&lt;br /&gt;
&lt;br /&gt;
The abstraction listed above is the minimal abstraction for efficiently&lt;br /&gt;
managing persistent storage (disks).&lt;br /&gt;
&lt;br /&gt;
This doesn&#039;t necessarily mean this is the most absolutely minimal abstraction.  &lt;br /&gt;
An even more minimal abstraction would be&lt;br /&gt;
to just treat storage devices as a bunch of fixed-size blocks.  However, that&#039;s getting too low-level,&lt;br /&gt;
because now all programs would have to worry about where on the device they put their data.&lt;br /&gt;
&lt;br /&gt;
Because the file abstraction is reasonably good, it has stuck around for decades.&lt;br /&gt;
&lt;br /&gt;
Fundamentally, though, it&#039;s a legacy abstraction.  Some operating systems have tried to get&lt;br /&gt;
away from it.  Look at PalmOS - it resisted having files for a long time, but eventually&lt;br /&gt;
gave in to support removable media; the primary OS and API still don&#039;t support files.&lt;br /&gt;
Microsoft&#039;s been wanting to get away from the legacy files abstraction too, but &lt;br /&gt;
somehow it doesn&#039;t seem to happen.&lt;br /&gt;
&lt;br /&gt;
==== Processes and Threads ====&lt;br /&gt;
&lt;br /&gt;
There&#039;s lots of other devices, but from an OS level, there are two other big ones:&lt;br /&gt;
CPU and RAM.  These two are generally abstracted with processes.&lt;br /&gt;
The process is the basic abstraction in operating systems for these two,&lt;br /&gt;
but is not the only abstraction.  There are also threads.&lt;br /&gt;
&lt;br /&gt;
CPU + RAM are abstracted as:&lt;br /&gt;
* processes&lt;br /&gt;
* threads&lt;br /&gt;
&lt;br /&gt;
A process may have multiple threads.  A thread shares memory with its process.&lt;br /&gt;
&lt;br /&gt;
* A process is an exclusive allocation of CPU and RAM.&lt;br /&gt;
* A thread is a non-exclusive allocation of RAM within a process,&lt;br /&gt;
but is an exclusive allocation of CPU.&lt;br /&gt;
* One or more threads constitute a process.&lt;br /&gt;
&lt;br /&gt;
Another way to talk about processes is in terms of address spaces and&lt;br /&gt;
execution context: &lt;br /&gt;
* An address space is just a virtual version of RAM. It may&lt;br /&gt;
be instantiated in physical memory, or it may not be. It&#039;s a set of addresses you&lt;br /&gt;
can call your own.&lt;br /&gt;
* Execution context is CPU state (registers, processor status&lt;br /&gt;
words, etc.). There&#039;s lots of state surrounding the processor when it&#039;s running&lt;br /&gt;
a program.  This state can be saved, and then restored to resume execution&lt;br /&gt;
at a later time.&lt;br /&gt;
&lt;br /&gt;
* A thread is one execution context matched with an address space.&lt;br /&gt;
* A process is one or more execution contexts plus an address space.&lt;br /&gt;
* A single-threaded process has one execution context, and one address space.&lt;br /&gt;
* A multithreaded process has multiple execution contexts, and one address space.&lt;br /&gt;
&lt;br /&gt;
The concept of multiple address spaces is a relatively modern one.&lt;br /&gt;
If you go back to the old days of MS-DOS, there was only one address space, the&lt;br /&gt;
physical address space.  We used to have things like TSRs, a 640 KB limit,&lt;br /&gt;
etc.  There was no virtualization of memory.  For programs to run at the same&lt;br /&gt;
time, they had to co-exist in the physical address space.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have multiple address spaces, you don&#039;t have processes and threads.&lt;br /&gt;
At best, you have threads, sharing the one address space you have.&lt;br /&gt;
&lt;br /&gt;
Historically, threads were abstracted differently than they are now, with&lt;br /&gt;
three operations -- FORK, JOIN, QUIT -- capitalized here to differentiate them from the newer terms.&lt;br /&gt;
&lt;br /&gt;
Why FORK?  Think of a fork in the road.  You&#039;re going along, then things split.&lt;br /&gt;
A FORK is supposed to represent that split.&lt;br /&gt;
&lt;br /&gt;
By FORKing, the main thing to note is that you&#039;re creating two execution&lt;br /&gt;
contexts, that may be sharing memory. The execution may start at the same&lt;br /&gt;
place, but may be diverging. How do you stop creating more and more and more&lt;br /&gt;
of these, to bring them back under control or stop them? That&#039;s the JOIN&lt;br /&gt;
operation. Each thread tracks how many threads are running: if you JOIN and&lt;br /&gt;
you&#039;re not the last one running, you just go away; otherwise you&lt;br /&gt;
synchronize back into the main thread.&lt;br /&gt;
&lt;br /&gt;
What&#039;s QUIT? QUIT stops the whole program -- all execution. It will cut&lt;br /&gt;
all threads off, even if the thread is one of the branches and not the main&lt;br /&gt;
thread.&lt;br /&gt;
&lt;br /&gt;
This was one of the earliest ways to abstract multiple execution contexts.&lt;br /&gt;
&lt;br /&gt;
What if, when you did the FORK, you made a copy of the entire process? There&lt;br /&gt;
are now two separate instances of the program, with the same state. One&lt;br /&gt;
difference is that if you quit one, the other will stay around -- but the&lt;br /&gt;
difference is more profound: they&#039;re not sharing the same address space (nor&lt;br /&gt;
execution context). This is the Unix model of processes.&lt;br /&gt;
&lt;br /&gt;
In the Unix process model the system starts with only one process: init. It&lt;br /&gt;
starts running, then it creates a copy of itself with fork, then another, etc.&lt;br /&gt;
&lt;br /&gt;
TODO: diagram&lt;br /&gt;
&lt;br /&gt;
In this diagram, what is the value of &#039;&#039;x&#039;&#039; on the bottom-left-most branch?&lt;br /&gt;
&#039;&#039;x&#039;&#039; is 5 in the Unix process model. However, if this was multithreaded,&lt;br /&gt;
&#039;&#039;x&#039;&#039; could be 7 or 5, depending on how fast the threads are running. It might&lt;br /&gt;
be 5 if the thread asking for the value of &#039;&#039;x&#039;&#039; runs before the thread setting &#039;&#039;x&#039;&#039;&lt;br /&gt;
to 7. This is known as a race condition, &lt;br /&gt;
because we don&#039;t know which thread will run or finish first.&lt;br /&gt;
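To make the contrast concrete, here&#039;s a small Python sketch of the threaded case: because threads share one address space, a write by one thread is visible to the others. (The join makes this demo deterministic; without it, reading &#039;&#039;x&#039;&#039; would be exactly the race just described.)&lt;br /&gt;
&lt;br /&gt;
```python
import threading

x = 5

def setter():
    global x
    x = 7    # threads share one address space, so this write is visible everywhere

t = threading.Thread(target=setter)
t.start()
t.join()     # without this join, reading x below would race with setter()
print(x)     # prints 7
```
&lt;br /&gt;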
&lt;br /&gt;
In Unix, they decided to make it easy and have different processes. These&lt;br /&gt;
processes can&#039;t change the state of their parents or children. &lt;br /&gt;
To share a value, you have to set the value before forking.  (Or through other means)&lt;br /&gt;
&lt;br /&gt;
There&#039;s a small glitch with what we&#039;ve said so far about Unix processes: That&lt;br /&gt;
they have exactly the same state when you fork. If this was true, they&#039;d&lt;br /&gt;
always do the same thing.  How do they know that they&#039;re different?&lt;br /&gt;
&lt;br /&gt;
Turns out that Unix fork is very simple, yet it helps with this.&lt;br /&gt;
The idiom you&#039;ll usually see is:&lt;br /&gt;
  &lt;br /&gt;
 pid = fork();&lt;br /&gt;
  &lt;br /&gt;
fork takes no arguments.&lt;br /&gt;
&lt;br /&gt;
When you fork, the result of fork is the pid (process ID) of the new process,&lt;br /&gt;
or 0 if you&#039;re the child.&lt;br /&gt;
&lt;br /&gt;
The tree of processes effectively becomes a family tree. (However, with some&lt;br /&gt;
bizarre genealogy that we&#039;ll see later)&lt;br /&gt;
&lt;br /&gt;
What you usually do is check the value of pid: if it&#039;s 0, do one thing,&lt;br /&gt;
otherwise do something else. If pid is nonzero, it is the pid of the child&lt;br /&gt;
process we just created by forking. You usually use this to track your child.&lt;br /&gt;
The classic use of fork is to create disposable children that do a specific&lt;br /&gt;
task for a short while, then go away.&lt;br /&gt;
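The idiom can be sketched in Python using the os module&#039;s fork; note how the child&#039;s copy of &#039;&#039;x&#039;&#039; is independent of the parent&#039;s:&lt;br /&gt;
&lt;br /&gt;
```python
import os

x = 5
pid = os.fork()           # fork takes no arguments
if pid == 0:
    # child: fork returned 0
    x = 7                 # changes only the child's copy of the address space
    os._exit(0)           # the disposable child goes away when done
else:
    # parent: fork returned the child's pid; use it to track (and reap) the child
    os.waitpid(pid, 0)
print(x)                  # the parent still sees 5
```
&lt;br /&gt;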
&lt;br /&gt;
The nice thing about this model is that it keeps things separate. You don&#039;t&lt;br /&gt;
need to worry about what the child is doing. If you want to communicate, you&lt;br /&gt;
have to explicitly set up to do this. There are some standard ways of doing&lt;br /&gt;
that communication.  We&#039;ll look at these later too.&lt;br /&gt;
&lt;br /&gt;
So now we know how to make new processes.  How do we run something different?  In&lt;br /&gt;
principle we don&#039;t need anything else.  We could open a file, read new code,&lt;br /&gt;
then jump to the new code.  However, we have the idea of exec().&lt;br /&gt;
Exec replaces the running program with the specified program, but preserves&lt;br /&gt;
the pid.&lt;br /&gt;
&lt;br /&gt;
In Unix, to start a new program you usually fork() then you exec() the desired&lt;br /&gt;
program on the child.  If you don&#039;t fork() first, then exec() will kill the&lt;br /&gt;
original process, replacing it with the program you called exec on.&lt;br /&gt;
&lt;br /&gt;
Exec causes the kernel to throw away the old address space, and give a new&lt;br /&gt;
address space, with the new binary.  The pid stays the same though.&lt;br /&gt;
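The fork-then-exec idiom looks roughly like this (a sketch; the echo binary is assumed to be on the PATH):&lt;br /&gt;
&lt;br /&gt;
```python
import os

pid = os.fork()
if pid == 0:
    # child: exec replaces this program with echo, but the pid stays the same
    os.execvp('echo', ['echo', 'hello from the child'])
    os._exit(1)                    # reached only if exec itself fails
wpid, status = os.waitpid(pid, 0)  # parent waits for its child to finish
```
&lt;br /&gt;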
&lt;br /&gt;
The Windows equivalent is CreateProcess().&lt;br /&gt;
&lt;br /&gt;
CreateProcess() takes lots of arguments about how to create the new&lt;br /&gt;
process (what to load, permissions, etc).  Fork takes none.  With fork(), &lt;br /&gt;
you can set things up yourself, and most of the settings will carry over to&lt;br /&gt;
the new program. (Including open files).  Note how different these two are.&lt;br /&gt;
&lt;br /&gt;
In Unix, you have the building blocks to do things, and you have to put them&lt;br /&gt;
together yourself. In Windows, you have the single API call to do them all at&lt;br /&gt;
once. Neither is strictly right or wrong.&lt;br /&gt;
&lt;br /&gt;
On older systems, when a big process was forked, everything was copied. On&lt;br /&gt;
newer systems, fork doesn&#039;t necessarily copy everything. With virtual memory&lt;br /&gt;
you can share much of the memory between two processes.&lt;br /&gt;
&lt;br /&gt;
In older APIs there was vfork() - suspend parent, fork, exec, then let the&lt;br /&gt;
parent and child both start to go again.  This idea avoided the copying when&lt;br /&gt;
the first thing you were going to do was exec.&lt;br /&gt;
&lt;br /&gt;
The basic idea to make this efficient is that the descriptions of the virtual&lt;br /&gt;
memory address spaces don&#039;t have to be mutually exclusive.  You could have&lt;br /&gt;
10 programs sharing portions of their address space -- such as the read-only&lt;br /&gt;
portions like the program code, but not the read-write portions.&lt;br /&gt;
&lt;br /&gt;
What if you didn&#039;t want to do an exec after forking? A classic example is a&lt;br /&gt;
daemon listening on the network for incoming connections. When an&lt;br /&gt;
incoming request comes in, the main program could deal with the request, but it&lt;br /&gt;
would also have to keep checking for more requests at the same time. Instead,&lt;br /&gt;
the typical Unix idiom is to fork off a child to process that connection,&lt;br /&gt;
and then go back and wait for more.&lt;br /&gt;
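A sketch of that daemon idiom over a loopback socket (the client connection is faked from the same process just to drive the demo):&lt;br /&gt;
&lt;br /&gt;
```python
import os
import socket

srv = socket.socket()
srv.bind(('127.0.0.1', 0))   # port 0: let the kernel pick a free port
srv.listen(1)

cli = socket.socket()        # stand-in for a remote client
cli.connect(srv.getsockname())

conn, addr = srv.accept()
pid = os.fork()
if pid == 0:
    conn.sendall(b'hi')      # child: handle this one connection...
    os._exit(0)              # ...then go away
conn.close()                 # parent drops its copy and would loop back to accept()
reply = cli.recv(2)
os.waitpid(pid, 0)
```
&lt;br /&gt;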
&lt;br /&gt;
You can have shared memory, but the default model for processes is&lt;br /&gt;
that nothing is shared, while the threading model is that everything is shared.&lt;br /&gt;
With threads you have to implement protections; with processes, you have to&lt;br /&gt;
opt in to sharing.&lt;br /&gt;
&lt;br /&gt;
Processes win out on reliability: fewer chances for errors.  You control&lt;br /&gt;
exactly what state is shared.&lt;br /&gt;
&lt;br /&gt;
Another thing we&#039;ll talk about later regarding threads versus processes is&lt;br /&gt;
how does this play on multiple cores?  This depends on the implementation,&lt;br /&gt;
and sometimes is a little tricky.&lt;br /&gt;
&lt;br /&gt;
Chapter two is talking about the model presented to the programmer.  An API&lt;br /&gt;
for your processes and threads to talk to the world.&lt;br /&gt;
&lt;br /&gt;
This course is fundamentally about how these things are implemented. It&#039;s&lt;br /&gt;
useful to know about these tricks so that you know how the computer is used.&lt;br /&gt;
It turns out the same tricks are useful in lots of other circumstances, such&lt;br /&gt;
as concurrency -- something most applications have to deal with. You&#039;ll learn&lt;br /&gt;
this here because the OS people did it first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Graphics ====&lt;br /&gt;
&lt;br /&gt;
This part of the lecture should help you with the lab.  It&#039;s about graphics.&lt;br /&gt;
&lt;br /&gt;
We&#039;ve talked about some standard abstractions so far: files, processes, threads.&lt;br /&gt;
&lt;br /&gt;
However, the thing you really interact with is the keyboard, mouse, and&lt;br /&gt;
display. In the standard Unix model, these are not a part of the operating&lt;br /&gt;
system. They&#039;re implemented in an application.&lt;br /&gt;
&lt;br /&gt;
The Unix philosophy is that if you don&#039;t have to put it in the kernel,&lt;br /&gt;
don&#039;t put it there, or if you do, make it interchangeable.&lt;br /&gt;
&lt;br /&gt;
The standard way to do graphics in Unix is X-Windows, or X for short.&lt;br /&gt;
Before X there was the W system.  There was a Y system at one point,&lt;br /&gt;
as well as Sun NeWS.&lt;br /&gt;
&lt;br /&gt;
There was also a system called Display Postscript.  Postscript is a&lt;br /&gt;
fully fledged programming language.  Originally used for printers.&lt;br /&gt;
It was developed for laser printers, by a little company called Adobe.&lt;br /&gt;
When laser printers came out, they had really high resolutions.  It was&lt;br /&gt;
hard to get the data necessary to print a page to the printer fast&lt;br /&gt;
enough...  So postscript programs were sent to the printer.  In the&lt;br /&gt;
early days of the Macintosh, the processor in the printer was more powerful&lt;br /&gt;
than the processor in the computer.  Postscript is a funny little language:&lt;br /&gt;
it&#039;s a postfix language.  Instead of saying things like &amp;quot;4+5&amp;quot; you&lt;br /&gt;
say &amp;quot;4 5 +&amp;quot; -- you push the operands onto the stack, then run an operator on them.&lt;br /&gt;
The same with function calls.&lt;br /&gt;
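A toy postfix evaluator in Python shows the idea (a sketch of the stack mechanism, not real Postscript):&lt;br /&gt;
&lt;br /&gt;
```python
def eval_postfix(tokens):
    # operands are pushed on a stack; an operator pops them and pushes the result
    stack = []
    for tok in tokens:
        if tok == '+':
            b = stack.pop()
            a = stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_postfix('4 5 +'.split()))   # prints 9
```
&lt;br /&gt;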
&lt;br /&gt;
In the 80s, there were many competing technologies for how to do graphics&lt;br /&gt;
in the Unix world.  X won.  But Display Postscript also kind of won,&lt;br /&gt;
because Macs use Display PDF in a system called Quartz, which was&lt;br /&gt;
created as a successor to Display Postscript. Because Postscript was linear,&lt;br /&gt;
it was hard to parallelize.  PDF is easier to parallelize.&lt;br /&gt;
&lt;br /&gt;
NeXT was the one that used Display Postscript first... NeXT was founded by&lt;br /&gt;
Steve Jobs. OS X is Unix with Display PDF... And you can run X-Windows on top&lt;br /&gt;
of that.&lt;br /&gt;
&lt;br /&gt;
X-Windows lets you open windows on remote computers. The way you create a&lt;br /&gt;
window on your local computer is the same way that you open a window on a&lt;br /&gt;
remote computer, 1000s of miles away. X is based on something called the X&lt;br /&gt;
Window Protocol. It just happens to work locally as well (with some&lt;br /&gt;
optimization like shared memory), but the messages were designed to work well&lt;br /&gt;
over ethernet.&lt;br /&gt;
&lt;br /&gt;
This was created by folks that wanted to talk to hundreds of computers, such&lt;br /&gt;
as the supercomputer in another room... but they wanted to see the windows&lt;br /&gt;
on their own computer.&lt;br /&gt;
&lt;br /&gt;
Consider what you have to do to see a remote window in Windows. You fire up&lt;br /&gt;
Remote Desktop Client, and you get the whole desktop remotely. If you want to&lt;br /&gt;
do 10 computers, you end up with 10 windows with 10 desktops and 10 start&lt;br /&gt;
buttons. This difference is a result of X-Windows being designed for networks&lt;br /&gt;
and Windows being designed for one computer.&lt;br /&gt;
&lt;br /&gt;
The terminology for X-Windows is a bit backwards from what we&#039;re used to: The&lt;br /&gt;
server is what we mostly think of as a client.  The server is what controls&lt;br /&gt;
access to the display: it runs where your display is to control your display,&lt;br /&gt;
mouse, keyboard... And to display a window, remotely or locally, you run a&lt;br /&gt;
program known as a client in X-Windows which connects over the network to&lt;br /&gt;
display a window on your X-Windows server.&lt;br /&gt;
&lt;br /&gt;
A funny thing about X is that it took the abstraction to an extreme. The people who&lt;br /&gt;
created X-Windows didn&#039;t know anything about usability or graphics or art. The&lt;br /&gt;
original X-Windows tools were created by regular programmers. Technically,&lt;br /&gt;
underneath, it&#039;s very nice. But they knew they didn&#039;t know, so they made it so&lt;br /&gt;
the user could decide what it should look like themselves. So you can&lt;br /&gt;
just switch out a few programs and things keep on working.&lt;br /&gt;
&lt;br /&gt;
This means that when you move your mouse to a window -- what&lt;br /&gt;
happens? Does the window take focus or not? One policy is known as click-to-focus.&lt;br /&gt;
In older X systems, you could just point your mouse there, and focus followed the pointer.&lt;br /&gt;
This is potentially very efficient, but also very confusing if you&#039;re not used&lt;br /&gt;
to it. Or how do you handle key sequences, or minimize? Who decides how to&lt;br /&gt;
do all this? They had the idea of something called a Window Manager. This goes&lt;br /&gt;
back to X Servers providing the technical minimums so that you&#039;re not limited&lt;br /&gt;
to one behaviour. The Window Manager is just another X client, with some&lt;br /&gt;
special privileges so it can run anywhere. It could run 1000s of miles away.&lt;br /&gt;
&lt;br /&gt;
This is why on Linux there&#039;s Gnome, KDE, etc. There&#039;s Motif, GTK, Qt, IceWM,&lt;br /&gt;
AfterStep, Blackbox, Sawfish, fvwm, twm, etc. Other graphical toolkits too,&lt;br /&gt;
abstracted away. These choices are all available there because the X-Windows&lt;br /&gt;
people left it very open by not making the choice for us. This does make&lt;br /&gt;
things a little confusing at times, though, because each application could&lt;br /&gt;
have different assumptions.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1442</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1442"/>
		<updated>2007-09-16T18:14:49Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portability&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency.&lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the text book, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, MacOSX, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the&lt;br /&gt;
different versions of Windows, as most Linux systems share components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. These were queued up, and run&lt;br /&gt;
one at a time in sequence.  These were typically things such as payroll and accounts receivable.&lt;br /&gt;
Usually these would be run overnight, and the output, either magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
With the big iron in the old days they had big computers that would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture with today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped cause the idea &amp;quot;what if we had multiple users and let them share the CPU&amp;quot; to come&lt;br /&gt;
about. This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space and the OS and its&lt;br /&gt;
applications were all sharing the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place, and more importantly, something/someone&lt;br /&gt;
in charge governing the processes. It&#039;s not a free-for-all: it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centrepiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes; but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problematic code goes into processes, where it can be restarted, and is kept&lt;br /&gt;
out of the kernel.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In Unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can setup the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  It&#039;s an alert to say something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU is running programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  An I/O device then flags an interrupt, and the CPU&lt;br /&gt;
has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, run its code; when another happens, run another&#039;s.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine that reads a key from the keyboard, stores it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, and then returns.&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
|}&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel only gets control on interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock I/O device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing: are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
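Here is a toy model of that translation (the page size and table contents are&lt;br /&gt;
invented; a real MMU does this in hardware and raises a page fault on a&lt;br /&gt;
missing mapping):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy per-process address translation; all numbers are invented.
PAGE_SIZE = 4096

# Each process has its own page table: virtual page number to frame.
page_tables = {
    "A": {0: 7},   # process A: virtual page 0 lives in physical frame 7
    "B": {0: 2},   # process B: virtual page 0 lives in physical frame 2
}

def translate(process, vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_tables[process][page]   # real MMU: page fault on a miss
    return frame * PAGE_SIZE + offset

# Both programs load "address 2000", yet touch different physical RAM:
print(translate("A", 2000))   # 30672
print(translate("B", 2000))   # 10192
```
&lt;br /&gt;
Both processes asked for address 2000 but ended up at different physical&lt;br /&gt;
addresses, so neither can see the other&#039;s memory.&lt;br /&gt;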
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU that the kernel thinks it controls are actually&lt;br /&gt;
virtual ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run Windows inside a window on Linux, OS X, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all,&lt;br /&gt;
and programs could get around them easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these features.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Processes_and_kernel.png]]&lt;br /&gt;
|}&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users and allows them to&lt;br /&gt;
upgrade.  In Windows, you tend to run as an administrator.  This means you&#039;re&lt;br /&gt;
running as the equivalent of the Unix root user.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs. &lt;br /&gt;
&lt;br /&gt;
Take that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In Unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. As&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not because of a fundamental flaw&lt;br /&gt;
in the design of Windows -- it&#039;s a little broken and overly complex in some ways --&lt;br /&gt;
but because of certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running users as administrators so that they don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
even administrators with regular privileges, but asks you whenever privileged&lt;br /&gt;
operations need doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes. &lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and it may be wrapped in one, but in implementation they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: user mode or supervisor mode. User mode&lt;br /&gt;
means that, running as a regular program, you can&#039;t talk to the IO/interrupt&lt;br /&gt;
vectors or to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts... So as a user program, to invoke the kernel,&lt;br /&gt;
you trigger an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like hardware interrupts, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So the kernel can then look at the memory of the invoking user program when a&lt;br /&gt;
user program calls the system call. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
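You can watch this boundary from user space. The sketch below assumes Linux&lt;br /&gt;
on x86-64, where system call number 39 is getpid; the point is that&lt;br /&gt;
os.getpid() is just a friendly wrapper around the same trap into the kernel:&lt;br /&gt;
&lt;br /&gt;
```python
# Issue a raw system call and compare it with the library wrapper.
# Assumes Linux on x86-64, where syscall number 39 is getpid.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
SYS_getpid = 39

raw = libc.syscall(SYS_getpid)   # trap into the kernel directly
wrapped = os.getpid()            # same trap, behind a function call

print(raw == wrapped)            # True on such a Linux system
```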
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
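Here is a rough way to feel that cost (the absolute numbers vary a lot by&lt;br /&gt;
machine and Python version, and are only illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
# Compare a do-nothing Python call with a call that enters the kernel.
# Absolute numbers vary by machine; run it yourself to see the gap.
import os
import timeit

def stays_in_userspace():
    return 42

t_plain = timeit.timeit(stays_in_userspace, number=100_000)
t_syscall = timeit.timeit(os.getpid, number=100_000)  # getpid traps in

print("plain call:", t_plain)
print("system call:", t_syscall)
```
&lt;br /&gt;
Typically the second loop is noticeably slower, even though both look like&lt;br /&gt;
ordinary function calls from Python.&lt;br /&gt;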
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all the partial work done in parallel by modern CPUs --&lt;br /&gt;
10-20 or more instructions -- to be thrown out. The CPU has to refill its&lt;br /&gt;
pipelines and resume. This happens at a level below the one the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1441</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1441"/>
		<updated>2007-09-16T18:13:48Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portability&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency. &lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, MacOSX, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the &lt;br /&gt;
different versions of Windows, as most Linux variants share components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
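The idea can be sketched as a toy model (the region boundaries and names&lt;br /&gt;
are invented for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy model of protected memory: each "process" may only touch its own
# region of RAM (region boundaries here are invented for illustration).
RAM = [0] * 100
regions = {"A": range(0, 50), "B": range(50, 100)}

def store(process, addr, value):
    if addr not in regions[process]:
        raise MemoryError(process + " may not write address " + str(addr))
    RAM[addr] = value

store("A", 10, 99)         # fine: address 10 belongs to A
try:
    store("A", 60, 99)     # A tries to stomp on B's memory
except MemoryError as err:
    print("blocked:", err)  # B's memory survives; only A need be killed
```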
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. The tasks were queued up and run&lt;br /&gt;
one at a time in sequence.  These were typically such things as payroll and accounts receivable.&lt;br /&gt;
Usually these would run overnight, and the output, either magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
With the big iron in the old days they had big computers that would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
With today&#039;s computers, the stored program architecture is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped give rise to the idea: &amp;quot;what if we had multiple users and let them share&lt;br /&gt;
the CPU?&amp;quot; This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
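The cooperative style is easy to mimic with Python generators: each&lt;br /&gt;
&amp;quot;program&amp;quot; runs until it voluntarily yields the CPU (the scheduler and task&lt;br /&gt;
names below are invented for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
# Cooperative multitasking sketch: tasks keep the "CPU" until they
# voluntarily yield, as programs did under Win3.1 and MacOS 9.

def task(name, steps):
    for i in range(steps):
        print(name, "step", i)
        yield             # the polite program gives up the CPU here

def run(tasks):
    # Naive round-robin: advance each task one step per turn.
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)
            tasks.append(current)
        except StopIteration:
            pass          # task finished; drop it

run([task("net", 2), task("gui", 2)])
```
&lt;br /&gt;
If a task never reaches its yield, nothing else gets to run -- that is&lt;br /&gt;
exactly the held-mouse-button behaviour, and why preemption is needed.&lt;br /&gt;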
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space, and the OS and its&lt;br /&gt;
applications all shared the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place and, more importantly, something&lt;br /&gt;
in charge governing the processes. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel, such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes, but,&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code is kept out of the kernel and put into processes,&lt;br /&gt;
as processes can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can set up the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  An interrupt is an alert that something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU runs the programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  So an I/O device flags an interrupt, and the CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, run its code; when another happens, run another.  &lt;br /&gt;
For example, for the keyboard, there&#039;s a routine that reads a key from the keyboard, stores it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, and then returns.&lt;br /&gt;
&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
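In code, an interrupt table is little more than a map from interrupt number&lt;br /&gt;
to handler routine (the numbers and handlers below are invented for&lt;br /&gt;
illustration):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy interrupt table: interrupt number mapped to a handler routine.
# The numbers and handler names are invented for illustration.

key_buffer = []

def keyboard_handler(data):
    key_buffer.append(data)   # stash the key so the next one can't clobber it

def clock_handler(data):
    print("clock tick: the scheduler gets a chance to run")

interrupt_table = {0: clock_handler, 1: keyboard_handler}

def raise_interrupt(number, data=None):
    # Hardware flags interrupt N; the CPU looks up and runs its code.
    interrupt_table[number](data)

raise_interrupt(1, "a")
raise_interrupt(1, "b")
print(key_buffer)             # ['a', 'b']
```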
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel can only control interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock IO device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for implementing protected&lt;br /&gt;
memory.  We&#039;ll talk about some variants, but the most widespread method&lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing.  Are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU &lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU that the kernel thinks it controls are actually&lt;br /&gt;
virtual ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run Windows inside a window on Linux, OS X, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all,&lt;br /&gt;
and programs could get around them easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these features.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
[[Image:Processes_and_kernel.png]]&lt;br /&gt;
&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users and allows them to&lt;br /&gt;
upgrade.  In Windows, you tend to run as an administrator.  This means you&#039;re&lt;br /&gt;
running as the equivalent of the Unix root user.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs. &lt;br /&gt;
&lt;br /&gt;
Take that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In Unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. As&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not because of a fundamental flaw&lt;br /&gt;
in the design of Windows -- it&#039;s a little broken and overly complex in some ways --&lt;br /&gt;
but because of certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running users as administrators so that they don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
even administrators with regular privileges, but asks you whenever privileged&lt;br /&gt;
operations need doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes. &lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and it may be wrapped in one, but in implementation they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: user mode or supervisor mode. User mode&lt;br /&gt;
means that, running as a regular program, you can&#039;t talk to the IO/interrupt&lt;br /&gt;
vectors or to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts... So as a user program, to invoke the kernel,&lt;br /&gt;
you trigger an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like hardware interrupts, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So the kernel can then look at the memory of the invoking user program when a&lt;br /&gt;
user program calls the system call. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all the partial work done in parallel by modern CPUs --&lt;br /&gt;
10-20 or more instructions -- to be thrown out. The CPU has to refill its&lt;br /&gt;
pipelines and resume. This happens at a level below the one the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1440</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1440"/>
		<updated>2007-09-16T18:09:57Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portability&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency. &lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, MacOSX, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the &lt;br /&gt;
different versions of Windows, as most Linux variants share components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. The tasks were queued up and run&lt;br /&gt;
one at a time in sequence.  These were typically such things as payroll and accounts receivable.&lt;br /&gt;
Usually these would run overnight, and the output, either magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
With the big iron in the old days they had big computers that would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
With today&#039;s computers, the stored program architecture is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped give rise to the idea: &amp;quot;what if we had multiple users and let them share&lt;br /&gt;
the CPU?&amp;quot; This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space, and the OS and its&lt;br /&gt;
applications all shared the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place and, more importantly, something&lt;br /&gt;
in charge governing the processes. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel, such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes; but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code is kept out of the kernel and put into processes,&lt;br /&gt;
as processes can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can set up the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  An interrupt is an alert that something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU is running the programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  The I/O device flags an interrupt, and the CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, run its code; when another happens, run another.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine to read a key from the keyboard, store it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, and then return.&lt;br /&gt;
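A minimal sketch of the one-bit-of-code-per-interrupt idea, with made-up handler names (a real interrupt vector is a hardware-defined table of addresses, not a Python dict):&lt;br /&gt;

```python
# Toy interrupt vector: interrupt number -> handler routine.
def clock_handler(state):
    state["ticks"] += 1            # the periodic clock tick

def keyboard_handler(state, key):
    state["buffer"].append(key)    # buffer the key so it isn't lost

interrupt_vector = {0: clock_handler, 1: keyboard_handler}

def raise_interrupt(state, num, *args):
    # The CPU stops what it is doing and runs the code for this interrupt.
    interrupt_vector[num](state, *args)

state = {"ticks": 0, "buffer": []}
raise_interrupt(state, 0)        # clock fires
raise_interrupt(state, 1, "a")   # key pressed, byte is buffered
```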
&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel only controls interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock I/O device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing; are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
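Here is a rough sketch of what the MMU does, assuming simple one-level page tables (all numbers are invented): both processes load from virtual address 2000 but reach different physical memory.&lt;br /&gt;

```python
PAGE_SIZE = 4096

def translate(page_table, vaddr):
    # Split the virtual address into page number and offset, then look
    # up this process's physical frame; a real MMU faults if unmapped.
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

# The kernel gives each process its own page table.
proc_a = {0: 7}   # virtual page 0 -> physical frame 7
proc_b = {0: 3}   # virtual page 0 -> physical frame 3

addr_a = translate(proc_a, 2000)   # 7 * 4096 + 2000 = 30672
addr_b = translate(proc_b, 2000)   # 3 * 4096 + 2000 = 14288
```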
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU the kernel thinks it controls are actually&lt;br /&gt;
virtual ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run windows inside a window on Linux, OSX, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 3.1 didn&#039;t have these.&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around them easily.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
[[Image:Processes_and_kernel.png‎]]&lt;br /&gt;
&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users to allow them to&lt;br /&gt;
upgrade.  In windows, you tend to run as an admin user.  This means you&#039;re&lt;br /&gt;
running as the equivalent of the unix root user.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs.&lt;br /&gt;
&lt;br /&gt;
Take that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. To&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
with the design of windows -- it&#039;s a little broken, overly complex in some ways --&lt;br /&gt;
but rather certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running as admin users so that users don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs you as&lt;br /&gt;
a regular user but asks you -- Yes/No -- whenever privileged&lt;br /&gt;
operations need doing.  And you just click on it.  But users&lt;br /&gt;
still click yes.&lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and may be wrapped in one, but in implementation they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: user mode or supervisor mode. User mode&lt;br /&gt;
means that, running as a regular program, you can&#039;t touch the I/O/interrupt&lt;br /&gt;
vectors or talk to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, then everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts.  So as a user program, to invoke the kernel,&lt;br /&gt;
you raise an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like a hardware interrupt, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So when a user program makes a system call, the kernel can look at the&lt;br /&gt;
memory of the invoking program. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
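A toy model of that trap mechanism (the syscall numbers, table, and trap helper are all invented for illustration): user code never jumps into kernel functions directly; it passes a number through the software-interrupt path and the kernel dispatches.&lt;br /&gt;

```python
# Toy "kernel" system-call table, reachable only through trap().
SYS_WRITE, SYS_GETPID = 0, 1

def sys_write(proc, text):
    proc["output"].append(text)
    return len(text)

def sys_getpid(proc):
    return proc["pid"]

syscall_table = {SYS_WRITE: sys_write, SYS_GETPID: sys_getpid}

def trap(proc, number, *args):
    # The software interrupt: switch to supervisor mode, dispatch
    # through the kernel's table, then return to the user program.
    return syscall_table[number](proc, *args)

proc = {"pid": 7, "output": []}
written = trap(proc, SYS_WRITE, "hello")   # looks like a function call...
pid = trap(proc, SYS_GETPID)               # ...but goes via the trap
```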
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all partial work done in parallel by modern CPUs to be&lt;br /&gt;
thrown out -- often 10-20 or more instructions. The CPU has to refill its&lt;br /&gt;
pipelines and resume. This happens at a level below the one the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1439</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1439"/>
		<updated>2007-09-16T18:09:48Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction (continued) ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portability&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency.&lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, MacOSX, VXWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the&lt;br /&gt;
different versions of windows, as most Linux distributions share many components, whereas windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
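On a unix-like system you can watch this protection in action (this sketch assumes os.fork is available, so it will not run on Windows): a child process that writes to memory it doesn&#039;t own is killed with SIGSEGV, and only the child dies.&lt;br /&gt;

```python
import ctypes, os, signal

pid = os.fork()
if pid == 0:
    # Child: write one byte to address 0, which is not mapped for us.
    # The MMU refuses, and the kernel kills the child with SIGSEGV.
    ctypes.memset(0, 0, 1)
    os._exit(0)  # never reached

# Parent: still alive -- the damage was limited to the child process.
_, status = os.waitpid(pid, 0)
segfaulted = os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGSEGV
```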
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines, and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. The tasks were queued up and run&lt;br /&gt;
one at a time in sequence.  They were typically such things as payroll and accounts receivable.&lt;br /&gt;
Usually these would be run overnight, and the output, either on magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
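The enforced time sharing can be sketched as a toy round-robin scheduler (the names task and round_robin are invented for illustration; the yield points stand in for the timer interrupt that forcibly preempts a real process):&lt;br /&gt;

```python
from collections import deque

def task(name, steps):
    # Each yield is a point where the "kernel" takes the CPU away from us.
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(tasks):
    # Give each task one time slice per turn; finished tasks drop out.
    ready, trace = deque(tasks), []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))   # run one slice
            ready.append(t)         # then put it at the back of the queue
        except StopIteration:
            pass
    return trace

trace = round_robin([task("A", 2), task("B", 2)])
# The tasks appear to run "simultaneously": A:0, B:0, A:1, B:1
```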
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
In the old days, the big iron computers would sit mostly idle,&lt;br /&gt;
except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture with today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This led to the idea: &amp;quot;what if we had multiple users and let them share the CPU?&amp;quot;&lt;br /&gt;
This is time-sharing. On modern computers we do this too, but instead&lt;br /&gt;
of sharing the CPU among multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
Once a program started running, it would hog the CPU until it decided it was&lt;br /&gt;
ready to give the CPU up.&lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected: a&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space, and the OS and its&lt;br /&gt;
applications all shared the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place and, more importantly, something&lt;br /&gt;
in charge governing the processes. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel, such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes; but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code is kept out of the kernel and put into processes,&lt;br /&gt;
as processes can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can set up the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  An interrupt is an alert that something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU is running the programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  The I/O device flags an interrupt, and the CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, run its code; when another happens, run another.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine to read a key from the keyboard, store it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, and then return.&lt;br /&gt;
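A minimal sketch of the one-bit-of-code-per-interrupt idea, with made-up handler names (a real interrupt vector is a hardware-defined table of addresses, not a Python dict):&lt;br /&gt;

```python
# Toy interrupt vector: interrupt number -> handler routine.
def clock_handler(state):
    state["ticks"] += 1            # the periodic clock tick

def keyboard_handler(state, key):
    state["buffer"].append(key)    # buffer the key so it isn't lost

interrupt_vector = {0: clock_handler, 1: keyboard_handler}

def raise_interrupt(state, num, *args):
    # The CPU stops what it is doing and runs the code for this interrupt.
    interrupt_vector[num](state, *args)

state = {"ticks": 0, "buffer": []}
raise_interrupt(state, 0)        # clock fires
raise_interrupt(state, 1, "a")   # key pressed, byte is buffered
```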
&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel only controls interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock I/O device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing; are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
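Here is a rough sketch of what the MMU does, assuming simple one-level page tables (all numbers are invented): both processes load from virtual address 2000 but reach different physical memory.&lt;br /&gt;

```python
PAGE_SIZE = 4096

def translate(page_table, vaddr):
    # Split the virtual address into page number and offset, then look
    # up this process's physical frame; a real MMU faults if unmapped.
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

# The kernel gives each process its own page table.
proc_a = {0: 7}   # virtual page 0 -> physical frame 7
proc_b = {0: 3}   # virtual page 0 -> physical frame 3

addr_a = translate(proc_a, 2000)   # 7 * 4096 + 2000 = 30672
addr_b = translate(proc_b, 2000)   # 3 * 4096 + 2000 = 14288
```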
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU the kernel thinks it controls are actually&lt;br /&gt;
virtual ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run windows inside a window on Linux, OSX, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 3.1 didn&#039;t have these.&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around them easily.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
[[Image:Processes_and_kernel.png‎]]&lt;br /&gt;
&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users to allow them to&lt;br /&gt;
upgrade.  In windows, you tend to run as an admin user.  This means you&#039;re&lt;br /&gt;
running as the equivalent of the unix root user.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs.&lt;br /&gt;
&lt;br /&gt;
Take that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. To&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
with the design of windows -- it&#039;s a little broken, overly complex in some ways --&lt;br /&gt;
but rather certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running as admin users so that users don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs you as&lt;br /&gt;
a regular user but asks you -- Yes/No -- whenever privileged&lt;br /&gt;
operations need doing.  And you just click on it.  But users&lt;br /&gt;
still click yes.&lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and may be wrapped in one, but in implementation they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: user mode or supervisor mode. User mode&lt;br /&gt;
means that, running as a regular program, you can&#039;t touch the I/O/interrupt&lt;br /&gt;
vectors or talk to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, then everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts.  So as a user program, to invoke the kernel,&lt;br /&gt;
you raise an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like a hardware interrupt, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So when a user program makes a system call, the kernel can look at the&lt;br /&gt;
memory of the invoking program. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
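A toy model of that trap mechanism (the syscall numbers, table, and trap helper are all invented for illustration): user code never jumps into kernel functions directly; it passes a number through the software-interrupt path and the kernel dispatches.&lt;br /&gt;

```python
# Toy "kernel" system-call table, reachable only through trap().
SYS_WRITE, SYS_GETPID = 0, 1

def sys_write(proc, text):
    proc["output"].append(text)
    return len(text)

def sys_getpid(proc):
    return proc["pid"]

syscall_table = {SYS_WRITE: sys_write, SYS_GETPID: sys_getpid}

def trap(proc, number, *args):
    # The software interrupt: switch to supervisor mode, dispatch
    # through the kernel's table, then return to the user program.
    return syscall_table[number](proc, *args)

proc = {"pid": 7, "output": []}
written = trap(proc, SYS_WRITE, "hello")   # looks like a function call...
pid = trap(proc, SYS_GETPID)               # ...but goes via the trap
```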
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all partial work done in parallel by modern CPUs to be&lt;br /&gt;
thrown out -- often 10-20 or more instructions. The CPU has to refill its&lt;br /&gt;
pipelines and resume. This happens at a level below the one the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1438</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1438"/>
		<updated>2007-09-16T18:09:28Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
[[Class Outline#Introduction|Last class]], we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portability&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency.&lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, Mac OS X, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the&lt;br /&gt;
different versions of Windows, as most Linux variants share some components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. These were queued up, and run&lt;br /&gt;
one at a time in sequence.  These were typically things such as payroll and accounts receivable.&lt;br /&gt;
Usually these would be run overnight, and the output, either on magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
In the old days of big iron, big computers would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture on today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped give rise to the idea: &amp;quot;what if we had multiple users and let them share the CPU?&amp;quot;&lt;br /&gt;
This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space, and the OS and its&lt;br /&gt;
applications all shared the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place and, more importantly, something in&lt;br /&gt;
charge governing the processes. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel, such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes, but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code is kept out of the kernel and put into processes,&lt;br /&gt;
as processes can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can set up the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  An interrupt is an alert to say something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU runs programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  The I/O device flags an interrupt, and the CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, its code runs; when another happens, another runs.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine that reads a key from the keyboard, stores it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, then returns.&lt;br /&gt;
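The vector-plus-handler idea can be sketched as a dispatch table. This is a userspace Python analogy only; real interrupt vectors are hardware-defined tables of code addresses, and the interrupt numbers and handler names here are invented for illustration.

```python
# A toy "interrupt vector": one handler routine per interrupt number.
keyboard_buffer = []
ticks = [0]

def keyboard_isr(scancode):
    # Store the key in a buffer so it isn't lost when the next key arrives.
    keyboard_buffer.append(scancode)

def timer_isr(_):
    ticks[0] += 1

interrupt_vector = {0: timer_isr, 1: keyboard_isr}

def raise_interrupt(number, data=None):
    # When one interrupt happens, run its code; when another happens, run another.
    interrupt_vector[number](data)

raise_interrupt(1, 0x1E)   # a key was pressed
raise_interrupt(0)         # a clock tick
raise_interrupt(1, 0x30)   # another key
```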
&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel can only control interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock I/O device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
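A userspace analogy for programming the clock, assuming a Unix system: ask the OS to deliver a SIGALRM signal to this process every 10 ms, the way a kernel programs the clock chip to interrupt it periodically. The interval and tick count are arbitrary choices for the sketch.

```python
import signal
import time

ticks = [0]

def on_tick(signum, frame):
    # "Call me!" -- the periodic wake-up.
    ticks[0] += 1

signal.signal(signal.SIGALRM, on_tick)
# First fire and repeat interval: 10 ms, like a 100-times-a-second clock.
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)

# Pretend to be a running program; the timer interrupts the loop regularly.
deadline = time.monotonic() + 2.0
while ticks[0] < 3 and time.monotonic() < deadline:
    pass

signal.setitimer(signal.ITIMER_REAL, 0, 0)  # cancel the timer
```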
&lt;br /&gt;
[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing; are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
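The "same address, different contents" point can be demonstrated with two processes. This sketch assumes a Unix system running CPython, where os.fork copies the address space and id() happens to be the object's virtual address; the value 2000 just echoes the lecture's example address.

```python
import os

# Two processes, one virtual address, two different contents.
value = [2000]
addr_in_parent = id(value)   # a virtual address (CPython implementation detail)

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: sees the same virtual address, but a private copy of the memory.
    value[0] = 9999
    os.write(w, f"{id(value)},{value[0]}".encode())
    os._exit(0)

os.waitpid(pid, 0)
child_addr, child_value = os.read(r, 64).decode().split(",")
same_address = int(child_addr) == addr_in_parent   # both "address 2000"
parent_value = value[0]                            # untouched by the child's write
```

The two processes live in their own virtual worlds: the child's write to "the same address" never reaches the parent's memory.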
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
kernel thinks it has control of the interrupt table and the MMU, but the ones&lt;br /&gt;
it sees are actually virtual, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run Windows inside a window on Linux, OS X, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around them easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these.&lt;br /&gt;
- Windows XP, Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
[[Image:Processes_and_kernel.png‎]]&lt;br /&gt;
&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users and allows them to&lt;br /&gt;
upgrade.  In Windows, you tend to run as an admin user.  This means you&#039;re&lt;br /&gt;
running the equivalent of the Unix root account.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs.&lt;br /&gt;
&lt;br /&gt;
Take that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In Unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. To&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
with the design of Windows -- it&#039;s a little broken, overly complex in some ways --&lt;br /&gt;
but rather certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running as admin users so that users don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
even administrator accounts with regular privileges, but asks you whenever privileged&lt;br /&gt;
operations need doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes.&lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and may be wrapped in one, but in implementation they are very&lt;br /&gt;
different.&lt;br /&gt;
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: User mode, or supervisor mode. User mode&lt;br /&gt;
means that, running as a regular program, you can&#039;t talk to the I/O or interrupt&lt;br /&gt;
vectors or to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts. So as a user program, to invoke the kernel,&lt;br /&gt;
you call an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like a hardware interrupt, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So the kernel can then look at the memory of the invoking user program when it&lt;br /&gt;
makes a system call. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
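The "wrapped in a function call" point can be shown by making the same request both ways. This sketch uses ctypes to reach libc's raw syscall(2) entry point; the number 39 is SYS_getpid on Linux x86-64 only, an assumption of the sketch (syscall numbers are architecture-specific, which is exactly why portable code uses the wrapper), with a fallback on other platforms.

```python
import ctypes
import os
import platform

# os.getpid() looks like a function call, but underneath it traps into the
# kernel.  Here we make the same request through the raw syscall interface.
SYS_getpid = 39  # Linux x86-64 only -- an assumption of this sketch

if platform.system() == "Linux" and platform.machine() == "x86_64":
    libc = ctypes.CDLL(None, use_errno=True)
    raw_pid = libc.syscall(SYS_getpid)   # trap into the kernel directly
else:
    raw_pid = os.getpid()                # fall back to the portable wrapper

wrapped_pid = os.getpid()                # the "function call" that also traps
```

Both paths end up in the same place: an interrupt/trap into the kernel, which looks up and runs the getpid handler.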
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all partial work done in parallel by modern CPUs to be&lt;br /&gt;
thrown out, 10-20 or more instructions&#039; worth. The CPU has to refill the&lt;br /&gt;
pipelines and resume. This happens at a level below the one the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1437</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1437"/>
		<updated>2007-09-16T18:07:27Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Last class, we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portability&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency.&lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, Mac OS X, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a variety of different operating systems to the same degree as the&lt;br /&gt;
different versions of Windows, as most Linux variants share some components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. These were queued up, and run&lt;br /&gt;
one at a time in sequence.  These were typically things such as payroll and accounts receivable.&lt;br /&gt;
Usually these would be run overnight, and the output, either on magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
In the old days of big iron, big computers would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture on today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped give rise to the idea: &amp;quot;what if we had multiple users and let them share the CPU?&amp;quot;&lt;br /&gt;
This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. A&lt;br /&gt;
process is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space, and the OS and its&lt;br /&gt;
applications all shared the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place and, more importantly, something in&lt;br /&gt;
charge governing the processes. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel, such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes, but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
All of the problem code is kept out of the kernel and put into processes,&lt;br /&gt;
as processes can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can set up the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  An interrupt is an alert to say something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU runs programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  The I/O device flags an interrupt, and the CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, its code runs; when another happens, another runs.&lt;br /&gt;
For example, for the keyboard, there&#039;s a routine that reads a key from the keyboard, stores it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, then returns.&lt;br /&gt;
&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel can only control interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock I/O device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing; are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU,&lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
kernel thinks it has control of the interrupt table and the MMU, but the ones&lt;br /&gt;
it sees are actually virtual, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run Windows inside a window on Linux, OS X, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around them easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these.&lt;br /&gt;
- Windows XP, Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
[[Image:Processes_and_kernel.png‎]]&lt;br /&gt;
&lt;br /&gt;
To upgrade things, the kernel trusts some programs/users and allows them to&lt;br /&gt;
upgrade.  In Windows, you tend to run as an admin user.  This means you&#039;re&lt;br /&gt;
running the equivalent of the Unix root account.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs.&lt;br /&gt;
&lt;br /&gt;
Take that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In Unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. To&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
with the design of Windows -- it&#039;s a little broken, overly complex in some ways --&lt;br /&gt;
but rather certain design choices made along the way in the name of usability, such as&lt;br /&gt;
running as admin users so that users don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
you as a regular user with full privileges available, but asks you whenever a privileged&lt;br /&gt;
operation needs doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes. &lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;br /&gt;
&lt;br /&gt;
=== System Calls ===&lt;br /&gt;
&lt;br /&gt;
How do you talk to a kernel?&lt;br /&gt;
&lt;br /&gt;
It&#039;s the dictator and you&#039;re a supplicant.  How do you make a timely request&lt;br /&gt;
to the kernel to ask it to please do something?  System calls!&lt;br /&gt;
&lt;br /&gt;
A system call is a standard mechanism for an application to talk to a kernel.&lt;br /&gt;
&lt;br /&gt;
A system call is NOT a function call.  In your APIs and the like, it may look like&lt;br /&gt;
a function call, and it may be wrapped in one, but in implementation they are very&lt;br /&gt;
different.&lt;br /&gt;
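&lt;br /&gt;
To make the distinction concrete, here is a minimal sketch in Python, assuming a Unix-like system.  Each os.* call below looks like an ordinary function call, but it is a thin library wrapper that traps into the kernel -- a tool like strace would show one system call per line:&lt;br /&gt;

```python
import os

# Each os.* call looks like a normal function call, but it is a thin
# wrapper around a system call: it executes a trap instruction that
# hands control to the kernel, which does the work and returns.
r, w = os.pipe()           # pipe(2): ask the kernel for two file descriptors
n = os.write(w, b"hello")  # write(2): trap into the kernel to move 5 bytes
data = os.read(r, n)       # read(2): trap again to fetch the bytes back
print(n, data)             # prints: 5 b'hello'
```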
&lt;br /&gt;
In order for the kernel to be in control, it has to run with special&lt;br /&gt;
privileges and not give these to the user programs. There are various schemes,&lt;br /&gt;
but the common one is a 1-bit option: User mode, or supervisor mode. User mode&lt;br /&gt;
means that running as a regular program, you can&#039;t talk to the IO/interrupt&lt;br /&gt;
vectors or talk to the MMU, but you can run instructions and access your own&lt;br /&gt;
memory. When you switch to supervisor mode, then everything is accessible. The&lt;br /&gt;
kernel runs in supervisor mode.&lt;br /&gt;
&lt;br /&gt;
So if you&#039;re cut off and can&#039;t see the kernel, how do you send it a message?&lt;br /&gt;
You might be able to write to a special place in memory that the kernel might&lt;br /&gt;
check periodically, but how do you get it to check now?  Normally the kernel&lt;br /&gt;
is invoked by interrupts.  So as a user program, to invoke the kernel,&lt;br /&gt;
you raise an interrupt.  There are special instructions, software interrupts,&lt;br /&gt;
that are like a hardware interrupt, but software initiates them.  There are&lt;br /&gt;
interrupt tables just like for hardware.&lt;br /&gt;
&lt;br /&gt;
So the kernel can then look at the memory of the invoking user program when a&lt;br /&gt;
user program calls the system call. Remember, because of the memory&lt;br /&gt;
protections, you can&#039;t just jump into kernel code, so the only way in is via an&lt;br /&gt;
interrupt.&lt;br /&gt;
&lt;br /&gt;
Therefore, system calls cause interrupts to invoke the kernel.&lt;br /&gt;
&lt;br /&gt;
In the process of doing a system call, the system has to do a lot of &#039;paperwork&#039; &lt;br /&gt;
to change context.  System calls are expensive, very expensive.  This is&lt;br /&gt;
one of the things that tends to bound the performance of an operating system.&lt;br /&gt;
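&lt;br /&gt;
As a rough illustration of that cost, one can time a do-nothing function against a one-byte write to /dev/null from Python on a Unix-like system.  The absolute numbers vary by machine and are inflated by interpreter overhead, but the trapping version is consistently slower (a sketch, not a rigorous benchmark):&lt;br /&gt;

```python
import os, time

def time_n(fn, n=50_000):
    # crude wall-clock timing of n repeated calls
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

def plain():            # ordinary function call: stays in user mode
    pass

fd = os.open(os.devnull, os.O_WRONLY)

def trapping():         # each call performs a real write(2) system call
    os.write(fd, b"x")

t_plain = time_n(plain)
t_trap = time_n(trapping)
print(t_plain, t_trap)  # the trapping version takes noticeably longer
os.close(fd)
```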
&lt;br /&gt;
Modern CPUs are so fast, shouldn&#039;t they be able to switch really fast?&lt;br /&gt;
Turns out the tricks used to make modern CPUs really fast are like those&lt;br /&gt;
used to make muscle cars -- they tend to go really fast in a straight line,&lt;br /&gt;
but when you want to turn, you have to slow down to nearly a stop.  Modern&lt;br /&gt;
CPUs are like that.&lt;br /&gt;
&lt;br /&gt;
Interrupts cause all partial work done in parallel by modern CPUs to be&lt;br /&gt;
thrown out -- often 10-20 or more instructions. The CPU has to refill the&lt;br /&gt;
pipelines and resume. This stuff happens at a level below the one the&lt;br /&gt;
kernel runs at.  The kernel saves its registers before switching context,&lt;br /&gt;
so that it can resume later.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Processes_and_kernel.png&amp;diff=1436</id>
		<title>File:Processes and kernel.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Processes_and_kernel.png&amp;diff=1436"/>
		<updated>2007-09-16T18:06:20Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: Processes and Kernel&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Processes and Kernel&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Stored_program_architecture_2.png&amp;diff=1435</id>
		<title>File:Stored program architecture 2.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Stored_program_architecture_2.png&amp;diff=1435"/>
		<updated>2007-09-16T18:00:03Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: Stored Program Architecture diagram&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stored Program Architecture diagram&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1434</id>
		<title>Introduction</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Introduction&amp;diff=1434"/>
		<updated>2007-09-16T17:59:36Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;(Posting of notes still in progress)&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Last class, we began talking about turning the machine we have into the machine we want.&lt;br /&gt;
&lt;br /&gt;
What are some properties of the machine that we want?&lt;br /&gt;
&lt;br /&gt;
- usable / accessible&lt;br /&gt;
- stable / reliable&lt;br /&gt;
- functional (access to underlying resources)&lt;br /&gt;
- efficient&lt;br /&gt;
- customizable&lt;br /&gt;
- secure&lt;br /&gt;
- multitasking&lt;br /&gt;
- portable&lt;br /&gt;
&lt;br /&gt;
If you have a computer that doesn&#039;t let you access the hardware, e.g. a&lt;br /&gt;
keyboard or video camera, it isn&#039;t very functional. Multitasking is slightly different from efficiency. &lt;br /&gt;
For portability, do you really want to have to rewrite applications to support&lt;br /&gt;
slight variations in the hardware such as different size hard disks and different&lt;br /&gt;
amounts of RAM?&lt;br /&gt;
&lt;br /&gt;
Operating systems don&#039;t do all of these perfectly, but they tend to do a lot&lt;br /&gt;
of these at least acceptably.&lt;br /&gt;
&lt;br /&gt;
If you look at the introduction to the textbook, it talks about various&lt;br /&gt;
types of operating systems.  Some of the operating systems we know about are:&lt;br /&gt;
Linux, Windows, MacOSX, VxWorks, QNX, MS-DOS, Solaris/xBSD, OS/2, BeOS, VMS,&lt;br /&gt;
MVS, OS/370, AIX, etc.&lt;br /&gt;
&lt;br /&gt;
Linux isn&#039;t a collection of operating systems as varied as the&lt;br /&gt;
different versions of Windows, since most Linux systems share many components, whereas Windows versions tend not to.&lt;br /&gt;
&lt;br /&gt;
Of the list above, most of them are modern operating systems, except MS-DOS.&lt;br /&gt;
To be a &amp;quot;modern&amp;quot; OS, there are two major qualities:  Does it have protected&lt;br /&gt;
memory, and does it have pre-emptive multitasking?&lt;br /&gt;
&lt;br /&gt;
=== Protected Memory ===&lt;br /&gt;
&lt;br /&gt;
What is protected memory?  &lt;br /&gt;
&lt;br /&gt;
Student: A situation where each program and the operating system has its own memory, and the OS prevents&lt;br /&gt;
other programs from writing to another program&#039;s memory.  &lt;br /&gt;
&lt;br /&gt;
Dr. Somayaji:  Access mechanisms to avoid having one program overwrite another program&#039;s memory.&lt;br /&gt;
&lt;br /&gt;
This lets you have a situation where if one program crashes, you can just restart it.  Damage due to&lt;br /&gt;
memory overwrites is limited to one program.&lt;br /&gt;
&lt;br /&gt;
=== Preemptive Multi-tasking ===&lt;br /&gt;
&lt;br /&gt;
A way to have more than one program run at a time. Older machines were known&lt;br /&gt;
as batch machines and their operating systems were batch operating systems.&lt;br /&gt;
These ran tasks that took a long time to run. These were queued up, and run&lt;br /&gt;
one at a time in sequence.  These were typically such things as payroll and accounts receivable.&lt;br /&gt;
Usually these would be run overnight, and the output, either on magnetic tape or a&lt;br /&gt;
stack of printouts, would be returned to the user in the morning.&lt;br /&gt;
&lt;br /&gt;
Preemptive multitasking - the OS enforces time sharing.&lt;br /&gt;
Co-operative multitasking  - each program lets others run.&lt;br /&gt;
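&lt;br /&gt;
The co-operative flavour can be sketched in a few lines of Python using generators -- yield is the moment a task volunteers to give up the CPU.  This is a conceptual sketch of the scheduling idea, not how a real OS is built:&lt;br /&gt;

```python
from collections import deque

def make_task(name, steps, log):
    # each task does a bit of work, then *chooses* to give up the CPU
    def body():
        for i in range(steps):
            log.append((name, i))
            yield               # co-operative: voluntarily hand control back
    return body()

def run_cooperative(tasks):
    ready = deque(tasks)        # the scheduler's ready queue
    while ready:
        task = ready.popleft()
        try:
            next(task)          # let the task run until it yields...
            ready.append(task)  # ...then send it to the back of the queue
        except StopIteration:
            pass                # task finished; drop it

log = []
run_cooperative([make_task("A", 2, log), make_task("B", 2, log)])
print(log)  # prints: [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

A task whose body never reaches a yield would hog the CPU forever -- exactly the failure mode that preemptive multitasking fixes.&lt;br /&gt;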
&lt;br /&gt;
If you look at MS-DOS, there are batch files.  These are just a sequence of&lt;br /&gt;
commands to run.  It runs them and then returns when done.  &lt;br /&gt;
&lt;br /&gt;
If you want to run a GUI, however, a batch system is unlikely to be what you want, as&lt;br /&gt;
a GUI environment tends to be interactive. &lt;br /&gt;
&lt;br /&gt;
With the big iron in the old days they had big computers that would be sitting&lt;br /&gt;
mostly idle, except when running the batch jobs.  The idea of time sharing came along around then.&lt;br /&gt;
&lt;br /&gt;
=== Structure of a computer ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Stored_program_architecture_1.png]]&lt;br /&gt;
Stored Program architecture&lt;br /&gt;
&lt;br /&gt;
The stored program architecture with today&#039;s computers is a bit of a fiction.&lt;br /&gt;
&lt;br /&gt;
The things the microprocessor does are significantly faster than the RAM&lt;br /&gt;
storage.  Modern computers have to wait for data from RAM. However this time is&lt;br /&gt;
dwarfed by the time spent waiting for I/O.  This is because I/O devices tend to be mechanical:&lt;br /&gt;
printers, hard disks, people at keyboards.&lt;br /&gt;
&lt;br /&gt;
This helped cause the idea &amp;quot;what if we had multiple users and let them share the CPU&amp;quot; to come&lt;br /&gt;
about. This is time-sharing. On modern computers, we do this too, but instead&lt;br /&gt;
of sharing with multiple users we run multiple programs for a single user --&lt;br /&gt;
multi-tasking.&lt;br /&gt;
&lt;br /&gt;
In older systems such as Win3.1 and MacOS 9, this was co-operative multi-tasking.&lt;br /&gt;
When things started running, they&#039;d hog the CPU until they decided they were&lt;br /&gt;
ready to give up the CPU.  &lt;br /&gt;
&lt;br /&gt;
There used to be a great feature in the Mac in the old days where if you held&lt;br /&gt;
down the mouse button, no networking would happen.  This was because the&lt;br /&gt;
program running at the time was hogging the CPU when the mouse button was pressed.&lt;br /&gt;
In pre-emptive multitasking, you get booted out periodically so that the&lt;br /&gt;
system can spend time paying attention to the network, to do animations, or&lt;br /&gt;
to let other applications run.  It spends a millisecond here, a millisecond there,&lt;br /&gt;
etc.  Instead of actually running simultaneously, they&#039;re periodically running, but &lt;br /&gt;
they seem to run simultaneously to the end-user.&lt;br /&gt;
&lt;br /&gt;
Sometimes you have 2 or more CPUs, but you have more than 2 things going on... &lt;br /&gt;
&lt;br /&gt;
=== Processes and the Kernel ===&lt;br /&gt;
&lt;br /&gt;
Processes are fundamentally the things that get multitasked and protected. It&lt;br /&gt;
is the abstraction of a running program. This is what makes an operating&lt;br /&gt;
system modern. In the old days, you had one memory space and the OS and its&lt;br /&gt;
applications were all sharing the CPU and memory. Now, with a process model,&lt;br /&gt;
there are barriers all over the place, and more importantly, something/someone&lt;br /&gt;
in charge governing the process. It&#039;s not a free-for-all; it has a dictator,&lt;br /&gt;
and its name is the kernel.&lt;br /&gt;
&lt;br /&gt;
Kernel as in the centerpiece.&lt;br /&gt;
&lt;br /&gt;
Question to class: How many people have heard of the term Microkernel? &lt;br /&gt;
Not many hands.&lt;br /&gt;
&lt;br /&gt;
There are various terms that modify the term kernel such as monolithic kernel, microkernel,&lt;br /&gt;
picokernel, etc. These specify how much stuff is in the kernel.&lt;br /&gt;
The idea is that the more code is in the kernel, the faster it goes, but&lt;br /&gt;
conversely, the more code there is, the higher the risk of crashing.&lt;br /&gt;
&lt;br /&gt;
In a microkernel, the problem-prone code is kept out of the kernel and put into&lt;br /&gt;
processes, which can be restarted.&lt;br /&gt;
&lt;br /&gt;
The debate about what is faster is not fully settled for technical and philosophical reasons.&lt;br /&gt;
Almost all operating systems on the list above are big kernels, not small ones.&lt;br /&gt;
&lt;br /&gt;
So if that&#039;s what a kernel is, how does a program fit into that?&lt;br /&gt;
If there&#039;s one program to rule them all, where do processes fit in?&lt;br /&gt;
The kernel decides who gets to run; it implements a priority scheme.&lt;br /&gt;
&lt;br /&gt;
Student:  &amp;quot;It got there first.  You start the computer, then the kernel gets in.  Everything has to&lt;br /&gt;
talk to it or it doesn&#039;t run...&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It gets to set the rules... that&#039;s sort of it...  &lt;br /&gt;
In unix, there&#039;s the idea of the init process.  It is first to run, and has&lt;br /&gt;
special responsibilities.  It is run using a regular binary, at system boot,&lt;br /&gt;
by the kernel.  This still doesn&#039;t tell us how the kernel keeps control of it.&lt;br /&gt;
&lt;br /&gt;
The kernel often keeps control by getting the hardware to help.  By loading first,&lt;br /&gt;
the kernel can setup the CPU and memory so that it has control. This&lt;br /&gt;
type of hardware assistance is generally available to the first code to&lt;br /&gt;
request it.&lt;br /&gt;
&lt;br /&gt;
=== Interrupts ===&lt;br /&gt;
&lt;br /&gt;
Interrupts -- what are they?  It&#039;s an alert to say something has to be done now.&lt;br /&gt;
&lt;br /&gt;
A CPU is running the programs until something happens, like someone pressing&lt;br /&gt;
a key or a network packet arriving.  So an I/O device flags an interrupt.  The CPU&lt;br /&gt;
now has to stop and pay attention.&lt;br /&gt;
&lt;br /&gt;
An interrupt is just a mechanism to allow the CPU to change contexts, to switch&lt;br /&gt;
from running one bit of code to another.  There&#039;s a standard set of interrupts&lt;br /&gt;
defined by the hardware.  Associated with each interrupt there&#039;s a bit of code.&lt;br /&gt;
When one interrupt happens, run its code; when another happens, run another.  &lt;br /&gt;
For example, for the keyboard, there&#039;s a routine that reads a key from the keyboard, stores it in&lt;br /&gt;
a buffer so it&#039;s not overwritten when the next key is pressed, and then returns.&lt;br /&gt;
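&lt;br /&gt;
The dispatch idea can be sketched as a table mapping interrupt numbers to handler routines.  The names and numbers below are hypothetical and purely illustrative -- real vector tables are fixed hardware formats, and handlers run in supervisor mode:&lt;br /&gt;

```python
KEYBOARD_IRQ = 1          # hypothetical interrupt number, for illustration

handlers = {}             # the "interrupt vector table"
key_buffer = []           # keystrokes parked here until a program reads them

def register(irq, handler):
    # wire a bit of code to an interrupt number
    handlers[irq] = handler

def raise_irq(irq, data=None):
    # the CPU jumps to whatever code is wired to this interrupt
    handlers[irq](data)

def keyboard_handler(key):
    # store the key so it isn't overwritten by the next keypress
    key_buffer.append(key)

register(KEYBOARD_IRQ, keyboard_handler)
raise_irq(KEYBOARD_IRQ, "a")
raise_irq(KEYBOARD_IRQ, "b")
print(key_buffer)         # prints: ['a', 'b']
```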
&lt;br /&gt;
Think of an interrupt as a little kid pulling at your pant leg.  It wants your attention now.&lt;br /&gt;
&lt;br /&gt;
The OS controls interrupts to control the CPU (and also what happens with RAM).&lt;br /&gt;
&lt;br /&gt;
Wait a second: if the kernel can only control interrupts, how can it keep&lt;br /&gt;
general control if no interrupts happen?  The clock IO device!  It throws&lt;br /&gt;
interrupts too.&lt;br /&gt;
&lt;br /&gt;
As a part of the boot sequence, the kernel programs the clock to wake the&lt;br /&gt;
operating system up every, say, 100th of a second.  Call me!  So the OS can&lt;br /&gt;
then keep running and perform its tasks as it needs:&lt;br /&gt;
&amp;quot;Is everyone behaving nicely?  Do I need to kill anyone?&amp;quot;&lt;br /&gt;
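&lt;br /&gt;
User programs can watch the same pattern from user space on a Unix-like system: signal.setitimer asks the kernel to &amp;quot;call me&amp;quot; periodically, much as the kernel programs the hardware clock at boot.  This is an analogy sketch and is Unix-only:&lt;br /&gt;

```python
import signal
import time

ticks = []

def on_tick(signum, frame):
    # "the clock called me" -- record when each tick arrived
    ticks.append(time.monotonic())

signal.signal(signal.SIGALRM, on_tick)
# fire every 10 ms, starting 10 ms from now
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)
time.sleep(0.05)                            # do "other work" while ticks arrive
signal.setitimer(signal.ITIMER_REAL, 0, 0)  # disarm the timer
print(len(ticks))
```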
&lt;br /&gt;
[[Image:Stored_program_architecture_2.png]]&lt;br /&gt;
&lt;br /&gt;
=== Virtual Memory ===&lt;br /&gt;
&lt;br /&gt;
A slight fly in the ointment: If you are a program, and want to take control,&lt;br /&gt;
how do you mount a rebellion?  You overwrite the interrupt table! This is&lt;br /&gt;
where protected memory comes in.  It prevents a regular program from doing this.&lt;br /&gt;
As a regular process, you often can&#039;t even see the interrupt table.&lt;br /&gt;
&lt;br /&gt;
How is that possible? Many schemes have been proposed for doing protected&lt;br /&gt;
memory.  Some variants will be spoken about, but the most widespread method &lt;br /&gt;
is something known as virtual memory.  Often tied into the concept of&lt;br /&gt;
virtual memory is the ability to use disk for memory too.&lt;br /&gt;
&lt;br /&gt;
The fundamental idea is that the address you think your instruction or variable is at in memory is&lt;br /&gt;
fictional/virtual.  Say you want to load from address 2000 into a register, and you&lt;br /&gt;
have another program that wants to do the same thing; are they doing the same thing?&lt;br /&gt;
Nope!  They have nothing to do with each other in a virtual memory model.&lt;br /&gt;
Both programs live in their own virtual worlds, and can&#039;t see each other.&lt;br /&gt;
The kernel, with the help of a little piece of hardware called the MMU, &lt;br /&gt;
is able to give each process its own virtual view of memory.  It decides&lt;br /&gt;
how that&#039;s going to map to real memory as it sees it.&lt;br /&gt;
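&lt;br /&gt;
You can watch this from user space with fork on a Unix-like system: after the fork, parent and child hold the very same virtual addresses, yet a write in one is invisible to the other, because the kernel and MMU map those addresses to different physical pages.  A minimal sketch:&lt;br /&gt;

```python
import os

buf = bytearray(b"parent")
pid = os.fork()             # clone this process
if pid == 0:
    # child: buf sits at the same virtual address as in the parent...
    buf[:] = b"child!"      # ...but this write touches the child's copy only
    os._exit(0)
os.waitpid(pid, 0)
print(bytes(buf))           # prints: b'parent' -- the parent's copy is untouched
```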
&lt;br /&gt;
So the kernel controls interrupts to control IO and it controls memory.&lt;br /&gt;
These are the two key controls.  If a kernel can&#039;t control these, it&lt;br /&gt;
can&#039;t properly provide protections (&amp;quot;It can&#039;t stop the rebellion&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
Last class we talked about hypervisors.  The whole idea there is that the&lt;br /&gt;
interrupt table and MMU the kernel thinks it controls are actually virtual&lt;br /&gt;
ones, provided by the hypervisor.&lt;br /&gt;
&lt;br /&gt;
So you can now run windows inside a window on Linux, OSX, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The differences between the various versions of Windows:&lt;br /&gt;
&lt;br /&gt;
- Windows 95 and 98 implemented these ideas for some programs, but not all, and&lt;br /&gt;
programs could get around the protections easily.&lt;br /&gt;
- Windows 3.1 didn&#039;t have these protections.&lt;br /&gt;
- Windows XP and Vista are modern.&lt;br /&gt;
&lt;br /&gt;
There&#039;s one small problem with Vista and XP, however.  &lt;br /&gt;
This has to do with the nature of the processes:&lt;br /&gt;
&lt;br /&gt;
To upgrade things, the kernel trusts certain programs and users, allowing them to&lt;br /&gt;
upgrade the system.  In Windows, you tend to run as an administrator user.  This means you&#039;re&lt;br /&gt;
running the equivalent of the unix root account.  The kernel listens to you&lt;br /&gt;
and does just about anything you want, including installing programs. &lt;br /&gt;
&lt;br /&gt;
Say you run that cute Christmas animation... which happens to install a keylogger&lt;br /&gt;
that sends all your keystrokes to the other side of the world, so someone can&lt;br /&gt;
log into your bank account.&lt;br /&gt;
&lt;br /&gt;
In unix, there&#039;s the concept of root users and non-root users.  Root can&lt;br /&gt;
ask for almost anything to be done, including changing the kernel&#039;s code.  If you can&lt;br /&gt;
tell the kernel to load new code, you can pretty much do anything. As&lt;br /&gt;
an unprivileged user, the kernel/OS says no.&lt;br /&gt;
&lt;br /&gt;
When people make fun of Windows being insecure, it&#039;s not a fundamental flaw&lt;br /&gt;
with the design of Windows -- it&#039;s a little broken and overly complex in some ways --&lt;br /&gt;
but the blame lies with certain design choices made along the way in the name of usability,&lt;br /&gt;
such as running users as administrators so that they don&#039;t need to do anything&lt;br /&gt;
special to change settings, install software, upgrade, etc.&lt;br /&gt;
This is why we have the current spyware problem.&lt;br /&gt;
&lt;br /&gt;
Vista changes this slightly with UAC (User Account Control), which runs&lt;br /&gt;
you as a regular user with full privileges available, but asks you whenever a privileged&lt;br /&gt;
operation needs doing -- Yes/No.  And you just click on it.  But users&lt;br /&gt;
still click yes. &lt;br /&gt;
&lt;br /&gt;
And now there are easy ways to turn off UAC completely.  We&#039;ll talk about this more&lt;br /&gt;
later when we talk about security.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Stored_program_architecture_1.png&amp;diff=1433</id>
		<title>File:Stored program architecture 1.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:Stored_program_architecture_1.png&amp;diff=1433"/>
		<updated>2007-09-16T17:44:53Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: Stored Program Architecture diagram&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stored Program Architecture diagram&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Class_Outline&amp;diff=1432</id>
		<title>Class Outline</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Class_Outline&amp;diff=1432"/>
		<updated>2007-09-16T17:06:25Z</updated>

		<summary type="html">&lt;p&gt;Rhooper: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Lecture Notes: September 10th, 2007 ==&lt;br /&gt;
&lt;br /&gt;
This lecture covered Administrative items and a part of the Introduction.&lt;br /&gt;
&lt;br /&gt;
=== Administrative ===&lt;br /&gt;
&lt;br /&gt;
==== Textbook ====&lt;br /&gt;
&lt;br /&gt;
Dr Somayaji has used the 3rd edition textbook before.  It isn&#039;t ideal, but it will suffice.  If you wish to use a different textbook, you&#039;re expected to ensure that it covers the topics as covered in class.&lt;br /&gt;
&lt;br /&gt;
==== Exams, Tests, Quizzes, Labs, Assignments ====&lt;br /&gt;
&lt;br /&gt;
There will be no quizzes and no final exam.  There will be 4 labs, 2 tests, and a term paper.  You may collaborate on the labs, but not on the tests and term paper.&lt;br /&gt;
&lt;br /&gt;
½ to ⅔ of a test is designed to be easy, with the remainder being more difficult.&lt;br /&gt;
&lt;br /&gt;
==== Term Paper ====&lt;br /&gt;
&lt;br /&gt;
Choice of topic for the term paper is fairly flexible. Take what you know in other areas of computer&lt;br /&gt;
science and integrate it into the paper. Include your own interests.  The most common mistake seen&lt;br /&gt;
is to settle on an exact paper topic first and then do research to fit it.  Do research first, then come up&lt;br /&gt;
with a topic, as the thing you want to write on may not have been written about yet.&lt;br /&gt;
&lt;br /&gt;
For bibliographies, use the form found in most papers and journals.  Bibtex is great for bibliographies.&lt;br /&gt;
&lt;br /&gt;
Google scholar is a great place to start looking for papers.  Many journals are only accessible by subscription, however the [http://library.carleton.ca Carleton University Library] provides access to students both onsite and off.  You will need your student ID in order to authenticate from off-site.  While on-site, you should be able to access the journals without authentication.&lt;br /&gt;
&lt;br /&gt;
A useful trick is to take the URL of a journal article and append &#039;&#039;proxy.library.com&#039;&#039; to the domain name portion of the URL.&lt;br /&gt;
&lt;br /&gt;
==== Extra Credit ====&lt;br /&gt;
&lt;br /&gt;
Dr Somayaji would like to build a class notes wiki.  You can get up to 3 percent added to your final mark if you participate and do an outstanding job.  Lower quality jobs will result in less extra credit.  You will be able to volunteer once during term.  &lt;br /&gt;
&lt;br /&gt;
==== Labs ====&lt;br /&gt;
&lt;br /&gt;
You are expected to attend the first lab for each lab assignment, as there&lt;br /&gt;
will be marks given for work done in class. This is sort of a way to take&lt;br /&gt;
attendance, as well as to help students get started with assignments. This&lt;br /&gt;
means there are 4 mandatory tutorials. Students are encouraged to attend all of&lt;br /&gt;
the labs. Labs will take longer than one tutorial session. Don&#039;t be discouraged by&lt;br /&gt;
students who manage to finish labs in one hour, as they may have experience&lt;br /&gt;
with operating systems already, or may have taken the course before.&lt;br /&gt;
&lt;br /&gt;
==== Term Papers ====&lt;br /&gt;
&lt;br /&gt;
You may talk to each other about papers to get feedback, but each paper is&lt;br /&gt;
supposed to be your own work.&lt;br /&gt;
&lt;br /&gt;
Plagiarism is a big deal, and some students of COMP3000 have been caught plagiarizing in&lt;br /&gt;
the past. &#039;&#039;&#039;Do not plagiarize.&#039;&#039;&#039; The official policy requires handing over the&lt;br /&gt;
plagiarized paper and student to the dean for discipline. &lt;br /&gt;
&lt;br /&gt;
You can&#039;t just cut and paste. If a section in&lt;br /&gt;
another person&#039;s work is fundamentally the same as yours, this is still&lt;br /&gt;
plagiarism. The paper is supposed to be your own work. Figures cannot just be cut&lt;br /&gt;
and pasted. You must redraw them yourself, then cite the source. Using a thesaurus to&lt;br /&gt;
rephrase every word will not work.  This is still fundamentally someone else&#039;s work.&lt;br /&gt;
&lt;br /&gt;
This issue carries over into the commercial world, even in code: Free code may&lt;br /&gt;
not be free -- read the licenses. Incorporating some 3rd party code may&lt;br /&gt;
require your company to release all of its code into the wild.&lt;br /&gt;
&lt;br /&gt;
Watch out for incorporating something but only tweaking it.  &lt;br /&gt;
&lt;br /&gt;
You should read your sources, put them away, and then pull them back out to&lt;br /&gt;
put the citations in...  This should help avoid plagiarizing, unless you have a&lt;br /&gt;
photographic memory.&lt;br /&gt;
&lt;br /&gt;
==== Lectures ====&lt;br /&gt;
&lt;br /&gt;
Question from the floor:  How do you structure the lectures?&lt;br /&gt;
&lt;br /&gt;
Dr Somayaji tries to keep the tests to the material in the text and the assignments.&lt;br /&gt;
The lectures will vary, and he considers them to be a performance.  Skipping&lt;br /&gt;
class will result in some interesting material getting missed, but you should&lt;br /&gt;
do acceptably by reading the text and doing the assignments.  The goal of the&lt;br /&gt;
lectures, however, is to make it worth your time to attend.&lt;br /&gt;
&lt;br /&gt;
One of the things that universities are struggling with, especially in the&lt;br /&gt;
computer science field, is why we are still requiring students to attend in person.  Part&lt;br /&gt;
of this is that lectures and other interactions with professors and TAs encourage&lt;br /&gt;
a different kind of learning due to physical presence.  There is a different type of energy, a different&lt;br /&gt;
type of synergy.  Being able to see the reactions of students helps to fine-tune the lecture.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Optional Reading ====&lt;br /&gt;
&lt;br /&gt;
Over Thanksgiving, students are encouraged to read [http://web.mac.com/nealstephenson/Neal_Stephensons_Site/Home.html Neal Stephenson]&#039;s article [http://www.cryptonomicon.com/beginning.html In the Beginning was the Command Line]. While&lt;br /&gt;
he&#039;s not entirely correct that UNIX is the one true computer science operating&lt;br /&gt;
system, it is almost true. UNIX wasn&#039;t built to fulfill a commercial niche. It was&lt;br /&gt;
built for programmers by programmers. It has survived because it has been&lt;br /&gt;
useful to programmers.&lt;br /&gt;
&lt;br /&gt;
One of the first things that tends to happen to most new Operating Systems or&lt;br /&gt;
platforms is that some part of unix or some variant of unix is ported to it (such as [http://www.netbsd.org/ NetBSD] or [http://www.linux.org/ Linux]).&lt;br /&gt;
Additionally, things like cygwin will pop up to allow you to port applications&lt;br /&gt;
and programs over. [http://www.cygwin.com/ Cygwin] works fairly well, but not perfectly.  This is like&lt;br /&gt;
a handyman going to a new job.  He&#039;s going to bring some of his favorite tools&lt;br /&gt;
with him to use at the new job.  You bring along your favorite tools because&lt;br /&gt;
you&#039;re familiar with them and can do powerful things with them.  &lt;br /&gt;
&lt;br /&gt;
While we mostly hear about unix in class, we&#039;ll still hear about windows some.&lt;br /&gt;
&lt;br /&gt;
=== Introduction ===&lt;br /&gt;
&lt;br /&gt;
What is an operating system?&lt;br /&gt;
&lt;br /&gt;
Student: A large piece of software that interfaces with the machine&#039;s hardware&lt;br /&gt;
to allow a user to interact with the computer to perform actions.&lt;br /&gt;
&lt;br /&gt;
Dr Somayaji doesn&#039;t like the size qualifier.&lt;br /&gt;
&lt;br /&gt;
The operating system turns the hardware (the computer that you have) into the&lt;br /&gt;
computer that you want (the applications).&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Comp3000_lecture1_figure1.png]]&lt;br /&gt;
|}&lt;br /&gt;
Ideally an application should not depend on specific hardware, but rather&lt;br /&gt;
generic things - not a specific brand of mouse, but rather any mouse by&lt;br /&gt;
abstraction.&lt;br /&gt;
&lt;br /&gt;
This is an interesting time to talk about operating systems, as this model&lt;br /&gt;
of operating system is becoming more and more obsolete. Part of this is that&lt;br /&gt;
more and more big applications are becoming part of the operating system (eg&lt;br /&gt;
browsers). Even the protection and privilege level separation is becoming&lt;br /&gt;
blurred. For example, Internet Explorer&#039;s HTML rendering engine is an object that can be used&lt;br /&gt;
by many other applications, making it an operating system service. This has&lt;br /&gt;
resulted in lawsuits surrounding what an Operating System is. Dr. Somayaji tends to take the&lt;br /&gt;
larger view, rather than the more specific view that only the layer that talks to&lt;br /&gt;
the hardware is the Operating System.&lt;br /&gt;
&lt;br /&gt;
What&#039;s newer, even if the idea is old, is something that has become really hot&lt;br /&gt;
in the last 5 years (though it has been around for at least 10): virtualization. VMWare&lt;br /&gt;
is now worth many billions of dollars. XenSource got bought by Citrix.&lt;br /&gt;
&lt;br /&gt;
How many people like to install Windows?  (Nobody raised their hand).&lt;br /&gt;
&lt;br /&gt;
The reason that this is important is that it&#039;s a huge pain to install operating&lt;br /&gt;
systems. So system administrators turn around and turn the OS into an&lt;br /&gt;
application, and just take copies of a new image and replace the OS image with&lt;br /&gt;
the new image to upgrade with instead.&lt;br /&gt;
&lt;br /&gt;
OS &amp;lt;math&amp;gt;\leftrightarrow&amp;lt;/math&amp;gt; Application interface is the system calls.&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Comp3000_lecture1_figure2.png]]&lt;br /&gt;
|}&lt;br /&gt;
The OS &amp;lt;math&amp;gt;\leftrightarrow&amp;lt;/math&amp;gt; Hypervisor interface provides virtual hardware to the OS (eg: here&#039;s an abstract&lt;br /&gt;
disk, keyboard, network). This abstraction is mapped as the hypervisor sees&lt;br /&gt;
fit onto real hardware.&lt;br /&gt;
&lt;br /&gt;
Having software provide virtual hardware requires a few tricks, as some&lt;br /&gt;
instructions on a processor go straight to the hardware making virtualization&lt;br /&gt;
hard.  Fortunately, Intel and AMD have provided extensions to help with this&lt;br /&gt;
(Intel VT-x and AMD-V).&lt;br /&gt;
&lt;br /&gt;
The main point to keep in mind here is that operating systems have&lt;br /&gt;
traditionally provided imaginary or virtual things for applications to play&lt;br /&gt;
with: &amp;quot;Oh, you want memory? I&#039;ll give you memory.&amp;quot; But what the OS gives&lt;br /&gt;
you may not correspond to physical memory at all. Say you ask for a megabyte&lt;br /&gt;
of memory: it may not actually be allocated at that time, but only when you&lt;br /&gt;
first use it.  The same goes for CPUs under a hypervisor: it may not give&lt;br /&gt;
you a real CPU at any given time.&lt;br /&gt;
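This lazy handing-out of memory can be seen with an anonymous mmap: the mapping is created&lt;br /&gt;
immediately, but on typical systems physical pages are only committed when a page is first&lt;br /&gt;
touched. A small sketch:&lt;br /&gt;

```python
# Sketch: ask the OS for a large region of "memory". mmap(-1, n)
# creates an anonymous mapping; on typical systems physical pages are
# only faulted in when first touched, not when the mapping is created.
import mmap

SIZE = 64 * 1024 * 1024  # ask for 64 MiB
m = mmap.mmap(-1, SIZE)

# The OS said yes immediately...
region_len = len(m)

# ...but only the pages we actually touch need real memory behind them.
m[0:5] = b"hello"          # touch the first page
m[SIZE - 1:SIZE] = b"!"    # touch the last page
first, last = bytes(m[0:5]), bytes(m[SIZE - 1:SIZE])
m.close()
print(region_len, first, last)
```
&lt;br /&gt;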
&lt;br /&gt;
The other big trend that makes things interesting has to do with gaming&lt;br /&gt;
systems.  Game manufacturers want to write their game for all of the&lt;br /&gt;
platforms...  The hardware, however, is remarkably different.&lt;br /&gt;
Modern PCs are moving towards 2-4 cores plus graphics processors.  Managing&lt;br /&gt;
multiple CPUs is a bit tricky, but we know how to do it.  The hard part&lt;br /&gt;
comes in with the graphics processors and non-traditional processors, such as&lt;br /&gt;
the Cell processor.  The Cell processor has 9 cores, but each core is&lt;br /&gt;
specialized and not symmetric; they&#039;re not all general purpose.&lt;br /&gt;
&lt;br /&gt;
{|align=&amp;quot;right&amp;quot;&lt;br /&gt;
|[[Image:Comp3000_lecture1_cell_processor.png]]&lt;br /&gt;
|}&lt;br /&gt;
A standard Cell processor has 8 SPUs plus a central core, but one SPU may be&lt;br /&gt;
defective; the PS3 therefore exposes 7 SPUs plus the central CPU, which is a&lt;br /&gt;
standard PPC (PowerPC) core.  The SPUs implement SIMD: they take one&lt;br /&gt;
instruction and apply it to multiple data items, so an operation can be done&lt;br /&gt;
to an entire array at once.  Processing graphics and sound means processing&lt;br /&gt;
big arrays; this is what graphics cards do, to a large extent.  It also means&lt;br /&gt;
SPUs are not good for general-purpose code: there&#039;s no decision-making&lt;br /&gt;
capability (branching).&lt;br /&gt;
&lt;br /&gt;
; SISD : Single Instruction Single Data.  Plain vanilla processors.&lt;br /&gt;
; SIMD : Single Instruction Multiple Data.&lt;br /&gt;
; MIMD : Multiple Instructions Multiple Data.&lt;br /&gt;
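The SIMD idea can be sketched in plain Python: one operation, issued once, applied across every&lt;br /&gt;
lane of a fixed-width vector. (Real SIMD hardware does all lanes in a single instruction; the&lt;br /&gt;
comprehension below only models the lanes.)&lt;br /&gt;

```python
# Pure-Python model of a 4-lane SIMD add: one "instruction" (add),
# many data items. SISD would issue one add per element; SIMD issues
# a single op that covers the whole vector at once.
def simd_add(a, b, lanes=4):
    assert len(a) == len(b) == lanes
    return [x + y for x, y in zip(a, b)]  # every lane gets the same op

result = simd_add([1, 2, 3, 4], [10, 20, 30, 40])
print(result)  # [11, 22, 33, 44]
```
&lt;br /&gt;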
&lt;br /&gt;
How do you use these? How do you co-ordinate these processors? How do you&lt;br /&gt;
abstract the division between CPU and GPU?&lt;br /&gt;
&lt;br /&gt;
The XBox 360 is closer to a PC in that it has multiple CPUs and a GPU.&lt;br /&gt;
Developing for the PS3 versus the XBox is thus very different.&lt;br /&gt;
&lt;br /&gt;
Hardware is becoming more and more parallel, but also more and more&lt;br /&gt;
special-purpose.&lt;/div&gt;</summary>
		<author><name>Rhooper</name></author>
	</entry>
</feed>