<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wlawrenc</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wlawrenc"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Wlawrenc"/>
	<updated>2026-04-22T14:08:13Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5700</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5700"/>
		<updated>2010-11-29T21:05:20Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login, as we found out in class, but you can listen to the speech and follow along with the slides pretty easily; I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now, very thorough :) Since you pretty much turned all of your bulleted points from the discussion page into content on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thnx for the great job! I basically finished the part of related work based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, and great additions. &lt;br /&gt;
In terms of what needs to be done, I do believe that adding some detail to the critique is where we really need some focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration: take a look at the topics presented, see if additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can be agreeing or disagreeing with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk and took some quick notes. There seem to be three good ones (of the nine) that I picked out. I&#039;ll summarize them and add them to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to have a listen to a specific question, and maybe try to do some more &#039;critiquing&#039;, here is a list of when each question takes place, a very general statement of what the question is about, and the very general answer:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;1 - 22:30 &amp;lt;br&amp;gt;Q: Did the paper consider Upstream patches(?) &amp;lt;br&amp;gt;A:No&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;2 - 23:00 &amp;lt;br&amp;gt;Q: Security issues with the pages &amp;lt;br&amp;gt;A:Pages pre-processor, no issue&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;3 - 24:10 &amp;lt;br&amp;gt;Q: What about blocking calls (read/write)? &amp;lt;br&amp;gt;A: Not handled&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;4 - 25:50 &amp;lt;br&amp;gt;Q: ? &amp;lt;br&amp;gt;A: Not a problem? (Personally didn&#039;t understand the question, don&#039;t believe it&#039;s important, but anyone who&#039;s willing should double check)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;5 - 28:00 &amp;lt;br&amp;gt;Q: Compare pollution between user thread switching to user-kernel thread switching? &amp;lt;br&amp;gt;A: No, only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;6 - 29:30 &amp;lt;br&amp;gt;Q: Scheduling problems of what cores are &#039;system&#039; core, and what cores are &#039;user&#039; cores &amp;lt;br&amp;gt;A: Very simple algorithm, but not tested when running multiple apps, would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;7 - 31:00 &amp;lt;br&amp;gt;Q: Situations where FlexSC is bad, when running less or equal threads to the number of cores, such as &amp;quot;Scientific programs&amp;quot;, mostly in userspace where one thread has 100% CPU resource &amp;lt;br&amp;gt;A: Agrees, FlexSC is not to be used for such situations&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;8 - 33:00 &amp;lt;br&amp;gt;Q: Problems with unrelated threads demanding service; how does it scale? An issue with the frequency of polling could cause sys calls to take time to perform &amp;lt;br&amp;gt;A: (Would be answered offline)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;9 - 34:30 &amp;lt;br&amp;gt;Q: Backwards compatibility and robustness &amp;lt;br&amp;gt;A: Only an issue with getTID (Thread ID), needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Wrote information in Critique for questions 3, 6 and 7 (Blocking Calls, Core Scheduling Issues, and When There Are Not More Threads Than Cores). If you feel any additions need to be made, please feel free to add them. Most importantly, I&#039;m not sure how to cite these. All information was obtained from the mp3 of the presentation; could someone let me know how to go about citing this?&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 21:05, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted way to request services from the OS kernel, historical implementation.&lt;br /&gt;
   - System calls almost always synchronous &lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
        o TLB is the translation lookaside buffer, which caches page translations (data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
        o Improve processor efficiency via enabling flexible scheduling of OS work which in turn reduces size of execution both in kernel and user space thus reducing pollution effects on processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
        o Flex-SC-Threads convert legacy system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases to performance potential of software while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by disparity of accessing different processor resources (registers, caches, memory)&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call, like pwrite, is called, the instruction per cycle level drops significantly and it takes many (in the example 14,000) cycles of execution for the instruction per cycle rate&lt;br /&gt;
 to return to the level it was at before the system (pwrite) call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that now a multi-core system can have individual cores designated to run either user-mode or kernel mode execution dynamically depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
        o This would allow the threading package to harvest independent system calls, by switching threads, in user-mode, whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - Traditional way to measure the performance cost of system calls is the mode switch time. This is the time necessary to execute the system call instruction in user-mode, resume execution in kernel mode and&lt;br /&gt;
 then return execution back to the user-mode.&lt;br /&gt;
   - Mode switch in modern processors is a processor exception.&lt;br /&gt;
        o Flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
        o State pollution is the measure of how much user-mode data is overwritten in places like the TLB, cache (L1, L2, L3), and branch prediction tables with kernel level execution instructions for the system call. &lt;br /&gt;
        o This data must be re-populated upon the return to user-mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
        o Synchronous system calls do cause a drop in user-mode IPC due to: direct overhead, the processor exception associated with the system call, which flushes the processor pipeline; and indirect overhead, system call pollution of processor structures.&lt;br /&gt;
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
        o A system call can be scheduled on a different core than the core on which it was invoked, only for exception-less system calls.&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
        o User-space threads find a free entry in a syscall page and place a request for a system call. The user-space thread can then continue executing without interruption and must then return to the syscall&lt;br /&gt;
 page to get the return value from the system call.&lt;br /&gt;
        o Neither issuing the system call (via the syscall page) nor getting the return value generate an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
        o Each syscall entry has a state:&lt;br /&gt;
                 Free – means a syscall can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts a special kernel thread, syscall thread, is used.&lt;br /&gt;
        o Sole purpose is to pull requests from syscall pages and execute them always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
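The Free/Submitted/Done entry life cycle and the kernel-side pass described above can be sketched roughly as follows. This is an illustrative Python model only, not the paper&#039;s implementation; the entry fields, page size, and helper names are all made up for the sketch.&lt;br /&gt;

```python
# Toy model of one syscall page: a table of entries, each cycling through
# the three states Free, Submitted, and Done.
FREE, SUBMITTED, DONE = "free", "submitted", "done"

def make_syscall_page(num_entries=64):
    # Every entry starts out free, with no request or return value.
    return [{"state": FREE, "call": None, "args": None, "ret": None}
            for _ in range(num_entries)]

def submit(page, call, args):
    # User-mode side: claim a free entry and mark it submitted.
    for entry in page:
        if entry["state"] == FREE:
            entry.update(state=SUBMITTED, call=call, args=args, ret=None)
            return entry
    return None  # page is full

def kernel_pass(page):
    # Kernel-mode side (the syscall thread): run every submitted request.
    for entry in page:
        if entry["state"] == SUBMITTED:
            entry["ret"] = entry["call"](*entry["args"])
            entry["state"] = DONE

def collect(entry):
    # User-mode side: read the return value and free the entry.
    ret = entry["ret"]
    entry.update(state=FREE, call=None, args=None, ret=None)
    return ret

page = make_syscall_page(4)
e = submit(page, lambda a, b: a + b, (2, 3))
kernel_pass(page)
print(collect(e))  # 5
```

A real syscall page lives in memory shared with the kernel and its state transitions must be safe under concurrency; the toy version above shows only the protocol, not the synchronization.&lt;br /&gt;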
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
        o The researchers describe working with exception-less system calls as being similar to event-driven programming in that you do not get the same sequential execution of code as you do with synchronous&lt;br /&gt;
 system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - Intended use is with server-type applications which have many user-mode threads (like Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel level thread via interactions with the syscall pages.&lt;br /&gt;
        o The syscall pages are private to each kernel level thread, this means each core of a system has a syscall page from which it will receive system calls.&lt;br /&gt;
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls compared to a bunch of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same is true for a remote server situation, one synchronous call is much faster than one exception-less system call but a batch of exception-less system calls is faster than the same number&lt;br /&gt;
 of synchronous system calls.&lt;br /&gt;
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Operating systems have a concept called multi-calls which involves collecting multiple system calls and submitting them as a single system call.&lt;br /&gt;
        o The Cassyopia compiler has an additional process called a looped multi-call where the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially, each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
        o Other techniques include Soft Timers and Lazy Receiver Processing which try to tackle the issue of locality of execution by handling device interrupts. They both try to&lt;br /&gt;
 limit processor interference associated with interrupt handling without affecting the latency of servicing requests.&lt;br /&gt;
        o Computation Spreading is another locality process which is similar to FlexSC.&lt;br /&gt;
                 Processor modifications that allow hardware migration of threads and migration to specialized cores.&lt;br /&gt;
                 Did not model TLBs and on current hardware synchronous thread migration is a costly interprocessor interrupt.&lt;br /&gt;
        o Also have proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                 Also, FlexSC can dynamically adapt the proportion of cores used by the kernel or shared between user and kernel execution.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o Main difference between past research and FlexSC is that none of the past proposals have decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=5699</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=5699"/>
		<updated>2010-11-29T21:01:59Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Critique: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts that are discussed within the paper. Here listed below, are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between User Space and Kernel Space. User Space is not given direct access to the Kernel&#039;s services for several reasons (one being security); hence, system calls are the messengers between User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
A &amp;lt;b&amp;gt;Mode Switch&amp;lt;/b&amp;gt; is a transition from one processor mode to another, specifically from User Space mode to Kernel mode or from Kernel mode back to User Space. It does not matter which direction we are switching in; this is simply a general term. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user-mode, perform the kernel mode execution of the system call, and finally return execution back to user-mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
&amp;lt;b&amp;gt;Synchronous Execution Model(System call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls specifically are managed in a serialized manner. Moreover, the synchronous model completes one system call at a time, and does not move onto the next system call until the previous system call is finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous system calls.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to the wasteful or unnecessary delay that system calls introduce into the system. This pollution follows directly from the fact that a system call invokes a mode switch, which is not a costless task. The &amp;quot;pollution&amp;quot; involved takes the form of data over-written in critical processor structures like the TLB (translation lookaside buffer - a table which reduces the frequency of main memory accesses for page table entries), branch prediction tables, and the cache (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop current execution unexpectedly in order to handle the issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
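To see why batching helps, here is a back-of-the-envelope Python sketch. The cycle costs are assumed placeholders for illustration, not measurements from the paper.&lt;br /&gt;

```python
# Toy accounting: batching N system calls into one kernel entry amortizes
# the mode-switch cost that synchronous calls pay on every invocation.
SWITCH_COST = 3000   # hypothetical cycles per user/kernel round trip
WORK_COST = 500      # hypothetical cycles of useful kernel work per call

def synchronous_cost(num_calls):
    # One mode switch per call.
    return num_calls * (SWITCH_COST + WORK_COST)

def batched_cost(num_calls, batch_size):
    # One mode switch per batch of submitted syscall-page entries.
    batches = -(-num_calls // batch_size)  # ceiling division
    return batches * SWITCH_COST + num_calls * WORK_COST

print(synchronous_cost(64))   # 224000 cycles
print(batched_cost(64, 32))   # 38000 cycles
```

The kernel work itself is unchanged; only the per-call switching overhead is amortized, which is the direct-cost half of the benefit (the indirect cache/TLB benefit is not captured by this arithmetic).&lt;br /&gt;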
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity tend to be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
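A toy Python cache model can make the distinction concrete. The block and slot sizes here are arbitrary choices for the sketch, not real hardware parameters.&lt;br /&gt;

```python
# A toy direct-mapped "cache" of memory block numbers, comparing sequential
# access (good spatial locality) against strided, scattered access.
BLOCK = 8      # addresses per cache block
SLOTS = 32     # cache slots

def hit_rate(addresses):
    cache = [None] * SLOTS
    hits = 0
    for addr in addresses:
        block = addr // BLOCK
        slot = block % SLOTS
        if cache[slot] == block:
            hits += 1            # spatial/temporal reuse pays off
        else:
            cache[slot] = block  # miss: evict and refill the slot
    return hits / len(addresses)

sequential = list(range(4096))                      # neighbours together
strided = [(i * 257) % 4096 for i in range(4096)]   # scattered accesses
# Sequential access hits exactly 7 of every 8 references (0.875) with
# these parameters; the strided pattern hits far less often.
print(hit_rate(sequential), hit_rate(strided))
```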
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the amount of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of using synchronous system calls comes from the easy-to-program nature of sequential operation. However, this ease of use also comes with undesirable side effects which can slow down the instructions per cycle (IPC) of the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy to implement for application programmers.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that, although easy to use, they are not optimal. Previous research includes work into &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution with multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls do not make use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution and multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between the kernel and the user like FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking) and hybrid solutions. However, FlexSC provides a mechanism to separate system call execution from system call invocation; this is a key difference between FlexSC and previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s attempt to provide an alternative to synchronous systems calls. The downside to synchronous system calls includes the cumulative mode switch time of multiple system calls each called independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially the most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is called, FlexSC groups system calls together into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and the indirect cost, pollution of critical processor structures, associated with switching modes.[1] System call batching works by first requesting as many system calls as possible, then switching to kernel mode, and then executing each of them.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;. Syscall pages are a set of memory pages shared between user-mode and kernel-mode. User-space threads interact with syscall pages in order to make a request (system call) for kernel-mode procedures. A user-mode thread may make a system call request on a free entry of a syscall page; the kernel will then execute the call once the batch condition is met and store the return value in the entry. The user-mode thread can then return to the syscall page to obtain the return value. Neither issuing the system call via the syscall page nor getting the return value generates a processor exception. Each syscall page is a table of syscall entries. These entries may have one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt; - meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt; - meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt; - meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To separate a system call's invocation from its execution, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute them, always in kernel mode. This is the mechanism that allows a user-mode thread to issue a request and continue running while the kernel-level system call executes. In addition, since invocation is separate from execution, a process running on one core may request a system call whose execution completes on an entirely different core. This gives exception-less system calls the unique capability of delegating all system call execution to a specific core while other cores continue user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means FlexSC-Threads can immediately run multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications that contain many user-mode threads. To accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming for an exception-less system call interface compared to the relatively simple synchronous interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
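The multiplexing described above can be caricatured as follows: when a user thread issues a system call, the runtime switches, in user mode, to another runnable thread instead of trapping into the kernel. Everything here (the thread count, the pending array, the round-robin pick) is invented for illustration.

```python
# Toy model of M user threads multiplexed on one core: a thread that
# issues a syscall is marked pending, and the runtime picks the next
# runnable thread via a user-mode switch instead of trapping.
M = 3
pending = [False] * M

def issue_and_switch(t):
    """Thread t issues a syscall: mark it pending and return the next
    runnable thread to switch to, or -1 once every thread is waiting
    (at which point the whole batch has been harvested)."""
    pending[t] = True
    for i in range(1, M + 1):
        nxt = (t + i) % M
        if not pending[nxt]:
            return nxt      # user-mode context-switch target
    return -1
```

When `issue_and_switch` returns -1, every user thread has a syscall in flight, which is exactly the point at which a batch would be handed to the syscall threads.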
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law. Moore&#039;s Law states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. This paper claims that the gap is mainly caused by the disparity in the cost of accessing different processor resources such as registers, cache, and memory.[1] In this manner, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to continue building more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is what creates the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls that can in turn be batched before execution. In this situation the FlexSC system far outperformed traditional synchronous system calls.[1] This is why the research paper&#039;s focus is on server-like applications, as a server must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution of synchronous calls below some threshold and exception-less system calls above that threshold would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work that it does not need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if one needs to block, FlexSC jumps to the next system call and executes that one instead. This can cause a problem for system calls such as reads and writes, where a write call has an outstanding dependency on a read call. This could be resolved by some kind of combined system call, that is, multiple system calls executed as one single call; however, FlexSC currently has no handling for this.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of cores to system calls. Currently, FlexSC first wakes up core X to run a syscall thread; when another batch comes in, if core X is still busy, it tries core X-1, and so on. This seemed to be the most efficient algorithm for FlexSC scheduling, but it was only tested with FlexSC running a single application at a time. FlexSC&#039;s scheduling algorithm would need to be &amp;quot;fine-tuned&amp;quot; for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, such as &#039;Scientific Programs&#039;, FlexSC causes more overhead than performance gain.&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept in which multiple system calls are collected and submitted as a single system call. They are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, an additional process where the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address blocking system calls, as exception-less system calls do. Multi-call system calls are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and in the presence of blocking the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
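The sequential contract can be sketched as follows: a looped multi-call in the style attributed to Cassyopia runs its entries strictly in order, optionally piping one result into the next argument. The data layout and function names below are invented for illustration.

```python
# Invented sketch of a Cassyopia-style looped multi-call: entries run
# strictly in order within one (simulated) kernel crossing, and a
# result can be piped into the next entry's argument.

def multicall(entries):
    """Unlike FlexSC, entry i+1 cannot start before entry i completes.
    Each entry is (func, arg, pipe_result); returns the result list."""
    results = []
    arg_override = None
    for func, arg, pipe_result in entries:
        if arg_override is not None:
            arg = arg_override
        ret = func(arg)         # stand-in for one system call
        results.append(ret)
        arg_override = ret if pipe_result else None
    return results

# Two toy "system calls" used to exercise the chain.
def double_it(x):
    return 2 * x

def add_one(x):
    return x + 1
```

If the first "call" here were to block, nothing after it could proceed, which is precisely the limitation the exception-less design avoids.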
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Soft Timers[13] and Lazy Receiver Processing[14] tackle locality of execution in the handling of device interrupts: both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is most similar to the multicore execution of FlexSC; it proposes processor modifications that allow hardware migration of threads to specialized cores. However, it did not model TLBs, and on current hardware synchronous thread migration requires a costly interprocessor interrupt. Other proposals dedicate cores to specific operating system functionality, but they differ from FlexSC in two ways: they require a microkernel, and FlexSC can dynamically adapt the proportion of cores used by the kernel or shared between user and kernel execution. While all these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threaded, event-based (non-blocking), and hybrid systems to obtain high performance on server applications. The main difference between these proposals and FlexSC is that none of them decouple system call invocation from execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] BARHAM, P., DRAGOVIC, B., FRASER, K., HAND, S., HARRIS, T., HO, A., NEUGEBAUER, R., PRATT, I., AND WARFIELD, A. Xen and the art of virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP) (2003), pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] LARUS, J., AND PARKES, M. Using Cohort-Scheduling to Enhance Server Performance. In Proceedings of the annual conference on USENIX Annual Technical Conference (ATEC) (2002), pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] ARON, M., AND DRUSCHEL, P. Soft timers: efficient microsecond software timer support for network processing. ACM Trans. Comput. Syst. (TOCS) 18, 3 (2000), 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] DRUSCHEL, P., AND BANGA, G. Lazy receiver processing (LRP): a network subsystem architecture for server systems. In Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI) (1996), pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] CHAKRABORTY, K., WELLS, P. M., AND SOHI, G. S. Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2006), pp. 283–292.&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5698</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5698"/>
		<updated>2010-11-29T20:32:06Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now very thorough :) Since you pretty much turned all of your bulleted points from the discussion page into that on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thnx for the great job! I basically finished the part of related work based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, and great additions. &lt;br /&gt;
In terms of what needs to be done, I do believe that adding some detail to the critique is where we really need some focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration: take a look at the topics presented, see if additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can agree or disagree with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk, and took some quick notes. There seem to be 3 good ones (of the 9) that I picked out. I&#039;ll summarize them and add to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to have a listen to a specific question, and maybe try to do some more &#039;critiquing&#039;, here is a list of when each question takes place, a very general statement of what the question is about, and the very general answer:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;1 - 22:30 &amp;lt;br&amp;gt;Q: Did the paper consider Upstream patches(?) &amp;lt;br&amp;gt;A:No&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;2 - 23:00 &amp;lt;br&amp;gt;Q: Security issues with the pages &amp;lt;br&amp;gt;A:Pages pre-processor, no issue&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;3 - 24:10 &amp;lt;br&amp;gt;Q: What about blocking calls (read/write)? &amp;lt;br&amp;gt;A: Not handled&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;4 - 25:50 &amp;lt;br&amp;gt;Q: ? &amp;lt;br&amp;gt;A: Not a problem? (Personally didn&#039;t understand the question; don&#039;t believe it&#039;s important, but anyone who&#039;s willing should double check)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;5 - 28:00 &amp;lt;br&amp;gt;Q: Compare pollution between user thread switching to user-kernel thread switching? &amp;lt;br&amp;gt;A: No, only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;6 - 29:30 &amp;lt;br&amp;gt;Q: Scheduling problems of what cores are &#039;system&#039; core, and what cores are &#039;user&#039; cores &amp;lt;br&amp;gt;A: Very simple algorithm, but not tested when running multiple apps, would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;7 - 31:00 &amp;lt;br&amp;gt;Q: Situations where FlexSC is bad, when running less or equal threads to the number of cores, such as &amp;quot;Scientific programs&amp;quot;, mostly in userspace where one thread has 100% CPU resource &amp;lt;br&amp;gt;A: Agrees, FlexSC is not to be used for such situations&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;8 - 33:00 &amp;lt;br&amp;gt;Q: Problems with un-related threads demanding service, how does it scale? Issue with frequency of polling could cause sys calls to take time to perform &amp;lt;br&amp;gt;A: (Would be answered offline)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;9 - 34:30 &amp;lt;br&amp;gt;Q: Backwards compatibility and robustness &amp;lt;br&amp;gt;A: Only an issue with getTID (Thread ID), needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted way to request services from the OS kernel, historical implementation.&lt;br /&gt;
   - System calls almost always synchronous &lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
        o TLB is the translation lookaside buffer, which uses pages (data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
        o Improve processor efficiency by enabling flexible scheduling of OS work, which in turn increases locality of execution in both kernel and user space, thus reducing pollution effects on processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
        o Flex-SC-Threads convert legacy system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases to performance potential of software while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by disparity of accessing different processor resources (registers, caches, memory)&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call, like pwrite, is called, the instruction per cycle level drops significantly and it takes many (in the example 14,000) cycles of execution for the instruction per cycle rate&lt;br /&gt;
 to return to the level it was at before the system (pwrite) call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that now a multi-core system can have individual cores designated to run either user-mode or kernel mode execution dynamically depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
        o This would allow the threading package to harvest independent system calls, by switching threads, in user-mode, whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - Traditional way to measure the performance cost of system calls is the mode switch time. This is the time necessary to execute the system call instruction in user-mode, resume execution in kernel mode and&lt;br /&gt;
 then return execution back to the user-mode.&lt;br /&gt;
   - Mode switch in modern processors is a processor exception.&lt;br /&gt;
        o Flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
        o State pollution is the measure of how much user-mode data is overwritten in places like the TLB, caches (L1, L2, L3) and branch prediction tables with kernel level execution instructions for the system call. &lt;br /&gt;
        o This data must be re-populated upon the return to user-mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
        o Synchronous system calls do cause a drop in user-mode IPC  due to; direct overhead -  the processor exception associated with the system call which flushes the processor pipeline; and indirect overhead&lt;br /&gt;
 – system call pollution on processors structures.&lt;br /&gt;
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
        o A system call can be scheduled on a different core than the one on which it was invoked; this applies only to exception-less system calls.&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
        o User-space threads find a free entry in a syscall page and place a request for a system call. The user-space thread can then continue executing without interruption and must then return to the syscall&lt;br /&gt;
 page to get the return value from the system call.&lt;br /&gt;
        o Neither issuing the system call (via the syscall page) nor getting the return value generate an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
        o Each syscall entry has a state:&lt;br /&gt;
                 Free – means a syscall can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts a special kernel thread, syscall thread, is used.&lt;br /&gt;
        o Sole purpose is to pull requests from syscall pages and execute them always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
        o The researchers describe working with exception-less system calls as being similar to event-driven programming in that you do not get the same sequential execution of code as you do with synchronous&lt;br /&gt;
 system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - Intended use is with server-type applications which have many user-mode threads (like Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel level thread via interactions with the syscall pages.&lt;br /&gt;
        o The syscall pages are private to each kernel level thread, this means each core of a system has a syscall page from which it will receive system calls.&lt;br /&gt;
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls compared to a bunch of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same is true for a remote server situation, one synchronous call is much faster than one exception-less system call but a batch of exception-less system calls is faster than the same number&lt;br /&gt;
 of synchronous system calls.&lt;br /&gt;
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Operating systems have a concept called multi-calls which involves collecting multiple system calls and submitting them as a single system call.&lt;br /&gt;
        o The Cassyopia compiler has an additional process called a looped multi-call where the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially, each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
        o Other techniques include Soft Timers and Lazy Receiver Processing which try to tackle the issue of locality of execution by handling device interrupts. They both try to&lt;br /&gt;
 limit processor interference associated with interrupt handling without affecting the latency of servicing requests.&lt;br /&gt;
        o Computation Spreading is another locality process which is similar to FlexSC.&lt;br /&gt;
                 It relies on processor modifications that allow hardware migration of threads, including migration to specialized cores.&lt;br /&gt;
                 It did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt.&lt;br /&gt;
        o There are also proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                 FlexSC, in contrast, can dynamically adapt the proportion of cores used by the kernel or shared between user and kernel execution.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o The main difference between past research and FlexSC is that none of the past proposals decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5697</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5697"/>
		<updated>2010-11-29T20:31:36Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now, very thoroughly :) Since you pretty much turned all of your bulleted points from the discussion page into that on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thanks for the great job! I basically finished the related-work part based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, and great additions.&lt;br /&gt;
In terms of what needs to be done, I believe adding some detail to the critique is where we really need to focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration: take a look at the topics presented, see whether additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can agree or disagree with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk and took some quick notes. There seem to be 3 good ones (of the 9) that I picked out. I&#039;ll summarize them and add them to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to have a listen to a specific question, and maybe try to do some more &#039;critiquing&#039;, here is a list of what time each question takes place, with a very general statement of what the question is about and the very general answer:&lt;br /&gt;
1 - 22:30 &lt;br /&gt;Q: Did the paper consider upstream patches(?) &lt;br /&gt;A: No&lt;br /&gt;
&lt;br /&gt;2 - 23:00 &lt;br /&gt;Q: Security issues with the pages &lt;br /&gt;A: Pages pre-processor, no issue&lt;br /&gt;
&lt;br /&gt;3 - 24:10 &lt;br /&gt;Q: What about blocking calls (read/write)? &lt;br /&gt;A: Not handled&lt;br /&gt;
&lt;br /&gt;4 - 25:50 &lt;br /&gt;Q: ? &lt;br /&gt;A: Not a problem? (Personally didn&#039;t understand the question, don&#039;t believe it&#039;s important, but anyone who&#039;s willing should double-check)&lt;br /&gt;
&lt;br /&gt;5 - 28:00 &lt;br /&gt;Q: Compare pollution between user thread switching and user-kernel thread switching? &lt;br /&gt;A: No, they only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&lt;br /&gt;6 - 29:30 &lt;br /&gt;Q: Scheduling problems of which cores are &#039;system&#039; cores and which are &#039;user&#039; cores &lt;br /&gt;A: Very simple algorithm, but not tested when running multiple apps; would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&lt;br /&gt;7 - 31:00 &lt;br /&gt;Q: Situations where FlexSC is bad: when running no more threads than cores, such as &amp;quot;scientific programs&amp;quot; that run mostly in userspace, where one thread has 100% of a CPU&#039;s resources &lt;br /&gt;A: Agrees; FlexSC is not to be used in such situations&lt;br /&gt;
&lt;br /&gt;8 - 33:00 &lt;br /&gt;Q: Problems with unrelated threads demanding service; how does it scale? An issue with the frequency of polling could cause system calls to take time to perform &lt;br /&gt;A: (Would be answered offline)&lt;br /&gt;
&lt;br /&gt;9 - 34:30 &lt;br /&gt;Q: Backwards compatibility and robustness &lt;br /&gt;A: Only an issue with getTID (Thread ID); needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted way to request services from the OS kernel, historical implementation.&lt;br /&gt;
   - System calls are almost always synchronous.&lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
        o The TLB (translation lookaside buffer) caches page translations (for data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
        o Improve processor efficiency by enabling flexible scheduling of OS work, which concentrates execution in both kernel and user space and thus reduces pollution effects on processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
        o FlexSC-Threads converts legacy synchronous system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases in the performance potential of software, while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by the disparity in access cost among different processor resources (registers, caches, memory).&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call such as pwrite is issued, the instructions-per-cycle (IPC) rate drops significantly, and it takes many cycles of execution (14,000 in the paper&#039;s example) for the IPC rate to return to its level before the call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that a multi-core system can now have individual cores designated to run either user-mode or kernel-mode execution, dynamically, depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
        o This would allow the threading package to harvest independent system calls by switching threads in user mode whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - The traditional way to measure the performance cost of system calls is the mode-switch time: the time necessary to execute the system call instruction in user mode, resume execution in kernel mode, and then return execution back to user mode.&lt;br /&gt;
   - A mode switch in modern processors is performed via a processor exception.&lt;br /&gt;
        o It must flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain, and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
        o State pollution is a measure of how much user-mode data is overwritten in structures like the TLB, caches (L1, L2, L3), and branch prediction tables by kernel-level execution of the system call.&lt;br /&gt;
        o This data must be re-populated upon the return to user mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
        o Synchronous system calls do cause a drop in user-mode IPC due to: direct overhead (the processor exception associated with the system call, which flushes the processor pipeline) and indirect overhead (system call pollution of processor structures).&lt;br /&gt;
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches, you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
        o A system call can be scheduled on a different core than the core on which it was invoked; this is possible only for exception-less system calls.&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
        o User-space threads find a free entry in a syscall page and place a request for a system call there. The user-space thread can then continue executing without interruption, returning to the syscall page later to get the system call&#039;s return value.&lt;br /&gt;
        o Neither issuing the system call (via the syscall page) nor getting the return value generate an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
        o Each syscall entry has a state:&lt;br /&gt;
                 Free – means a syscall can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts, a special kernel thread, the syscall thread, is used.&lt;br /&gt;
        o Its sole purpose is to pull requests from syscall pages and execute them, always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
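The syscall-page mechanism above can be sketched as a small state machine. This is a hedged illustration in Python, not the paper&#039;s actual kernel C code; the entry fields, function names, and handler table are invented for the example.&lt;br /&gt;

```python
# Sketch of the syscall-page entry lifecycle: "free" / "submitted" / "done".
FREE, SUBMITTED, DONE = "free", "submitted", "done"

class SyscallEntry:
    def __init__(self):
        self.state = FREE
        self.call = None      # (name, args) written by the user thread
        self.result = None    # return value filled in by the syscall thread

def post(page, name, args):
    """User thread: find a free entry, write the request, keep running."""
    for i, entry in enumerate(page):
        if entry.state == FREE:
            entry.call = (name, args)
            entry.state = SUBMITTED   # no processor exception is raised
            return i
    return None  # page full

def run_syscall_thread(page, handlers):
    """Kernel syscall thread: pull submitted requests and execute them."""
    for entry in page:
        if entry.state == SUBMITTED:
            name, args = entry.call
            entry.result = handlers[name](*args)
            entry.state = DONE

def collect(page, i):
    """User thread: later, read the result back from the page."""
    entry = page[i]
    if entry.state == DONE:
        result = entry.result
        entry.state = FREE        # entry can be reused
        return result
    return None  # not finished yet

page = [SyscallEntry() for _ in range(8)]
handlers = {"getpid": lambda: 1234}
slot = post(page, "getpid", ())
run_syscall_thread(page, handlers)
print(collect(page, slot))        # 1234
```

The key property the sketch shows is that neither post() nor collect() raises an exception; all kernel work happens inside run_syscall_thread().&lt;br /&gt;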
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
        o The researchers describe working with exception-less system calls as similar to event-driven programming, in that you do not get the same sequential execution of code as you do with synchronous system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - Intended for use with server-type applications that have many user-mode threads (such as Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel-level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel-level thread via interactions with the syscall pages.&lt;br /&gt;
        o The syscall pages are private to each kernel-level thread; this means each core of the system has a syscall page from which it receives system calls.&lt;br /&gt;
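As a rough illustration of the M-on-N multiplexing described above, here is a sketch that uses Python generators as stand-ins for user-mode threads. It is not the FlexSC-Threads implementation; the names (flexsc_run, worker) and the run-loop structure are invented for the example.&lt;br /&gt;

```python
# When a user thread "invokes" a system call, the library posts it and
# switches to another ready thread instead of trapping to the kernel.
from collections import deque

def flexsc_run(threads, handlers):
    """threads: generators that yield ("syscall", name, args) requests."""
    ready = deque(threads)
    waiting = []              # (thread, pending request) pairs
    results = {}
    while ready or waiting:
        if ready:
            t = ready.popleft()
            try:
                kind, name, args = t.send(results.pop(id(t), None))
            except StopIteration:
                continue      # thread finished
            waiting.append((t, (name, args)))   # call harvested, thread parks
        else:
            # all user threads blocked: run the syscall thread over the batch
            for t, (name, args) in waiting:
                results[id(t)] = handlers[name](*args)
                ready.append(t)
            waiting = []

def worker(log, tag):
    ret = yield ("syscall", "getpid", ())
    log.append((tag, ret))

log = []
flexsc_run([worker(log, "a"), worker(log, "b")], {"getpid": lambda: 7})
print(log)   # [('a', 7), ('b', 7)]
```

The point of the sketch is the harvesting step: a whole batch of independent calls accumulates before any kernel work runs.&lt;br /&gt;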
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls against an equal number of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same holds in a remote-server setting: one synchronous call is much faster than one exception-less system call, but a batch of exception-less system calls is faster than the same number of synchronous system calls.&lt;br /&gt;
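The overhead results above follow from a simple amortization argument. The sketch below uses made-up cycle counts (not the paper&#039;s measurements) just to show the shape of it: a single exception-less call pays page overhead plus a switch, so it loses, while a batch shares one switch across many calls.&lt;br /&gt;

```python
# Back-of-envelope cost model with illustrative (hypothetical) numbers.
MODE_SWITCH = 3000    # cycles of direct + indirect cost per exception
WORK = 500            # cycles of actual kernel work per call
PAGE_OVERHEAD = 200   # cycles to post/collect one syscall-page entry

def synchronous(n_calls):
    # every synchronous call pays the full mode-switch cost
    return n_calls * (MODE_SWITCH + WORK)

def exceptionless(n_calls):
    # entries are posted cheaply; the batch shares one switch to kernel mode
    return n_calls * (PAGE_OVERHEAD + WORK) + MODE_SWITCH

print(synchronous(1), exceptionless(1))    # 3500 3700  (single call: slower)
print(synchronous(32), exceptionless(32))  # 112000 25400  (batch: much faster)
```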
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Some operating systems support multi-calls, which collect multiple system calls and submit them to the kernel as a single call.&lt;br /&gt;
        o The Cassyopia compiler adds a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-call proposals do not explore parallel execution of system calls, nor do they address blocking system calls the way exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially; each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
        o Other techniques include Soft Timers and Lazy Receiver Processing, which tackle locality of execution in the context of device-interrupt handling. Both try to limit the processor interference associated with interrupt handling without increasing the latency of servicing requests.&lt;br /&gt;
        o Computation Spreading is another locality technique, similar in spirit to FlexSC.&lt;br /&gt;
                 It relies on processor modifications that allow hardware migration of threads, including migration to specialized cores.&lt;br /&gt;
                 It did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt.&lt;br /&gt;
        o There are also proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                 FlexSC, in contrast, can dynamically adapt the proportion of cores used by the kernel or shared between user and kernel execution.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o The main difference between past research and FlexSC is that none of the past proposals decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5172</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5172"/>
		<updated>2010-11-17T22:17:55Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5171</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5171"/>
		<updated>2010-11-17T22:16:40Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
It might be useful to point out that there are 7 sections in the article (Intro, 5 points, Conclusion), as well as 7 people in the group, making for an easy division of topics -- [[User:wlawrenc|Wesley]]&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5058</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=5058"/>
		<updated>2010-11-16T15:58:28Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3381</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3381"/>
		<updated>2010-10-13T22:15:46Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Tabulated Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the first goal of the scheduler, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features to allow personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It became more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to maintain fairness. To determine the balance, CFS tracks the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To maintain fairness, the scheduler takes the leftmost node from the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side drift toward the left as the running tasks accumulate virtual runtime, maintaining fairness.&lt;br /&gt;
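The pick-leftmost/charge-runtime/reinsert loop described above can be sketched in a few lines of Python. This is a toy model only, not kernel code: a binary heap stands in for the kernel&#039;s red-black tree (both give the smallest-vruntime task cheaply), and the task set, names, and nanosecond slice are made up for illustration.

```python
import heapq

def schedule(tasks, slice_ns, steps):
    """Toy CFS: repeatedly run the task with the smallest virtual runtime.

    tasks    -- dict mapping task name to its starting virtual runtime
    slice_ns -- execution time charged to a task each time it runs
    steps    -- number of scheduling decisions to simulate
    """
    # The heap plays the role of the time-ordered red-black tree.
    heap = [(vruntime, name) for name, vruntime in tasks.items()]
    heapq.heapify(heap)
    history = []
    for _ in range(steps):
        vruntime, name = heapq.heappop(heap)   # "leftmost node": smallest vruntime
        history.append(name)                   # the task executes for one slice
        # Charge the execution time to its virtual runtime and reinsert.
        heapq.heappush(heap, (vruntime + slice_ns, name))
    return history

# Two fresh tasks and one that has already consumed 30 units: the fresh
# tasks alternate, and "c" waits until the others catch up to it.
print(schedule({"a": 0, "b": 0, "c": 30}, slice_ns=10, steps=6))
```

Running this prints `['a', 'b', 'a', 'b', 'a', 'b']`: the task that has already used the most CPU sits on the "right side" and is not picked until the others&#039; virtual runtimes reach its own, which is exactly the fairness property the tree ordering provides.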
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels get significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which are the lengths of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can launch a program at a given niceness with the shell command nice, and change the niceness of a running program with renice. Nice values range from -20 to +19.&lt;br /&gt;
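The non-linearity of nice levels can be made concrete with a small calculation. CFS converts each nice level into a weight, with each step of one nice level changing the weight by roughly a factor of 1.25; the 1024 base and the smooth 1.25 exponent here are an approximation of the kernel&#039;s actual lookup table, used only to illustrate the shape of the curve.

```python
def cpu_weight(nice):
    """Approximate CFS weight for a nice level.

    Each nice step scales the weight by ~1.25x, so CPU share falls off
    geometrically, not linearly, as niceness increases.
    """
    return 1024 / (1.25 ** nice)

# Share of the CPU when a nice-0 task competes with a nice-10 task:
w0, w10 = cpu_weight(0), cpu_weight(10)
print(w0 / (w0 + w10))   # roughly 0.9: the nice-0 task gets about 90%
```

Ten nice levels apart, the less nice task receives about nine times the CPU of its competitor, which matches the intuition in the paragraph above that the spectrum is far from linear.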
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3379</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3379"/>
		<updated>2010-10-13T22:13:17Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Linux Schedulers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example multiple core CPUs, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers; the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round-robin policies, iteration, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but spent a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features that allow tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to maintain fairness. To determine the balance, CFS tracks the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To maintain fairness, the scheduler takes the leftmost node from the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side drift toward the left as the running tasks accumulate virtual runtime, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels get significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which are the lengths of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can launch a program at a given niceness with the shell command nice, and change the niceness of a running program with renice. Nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3377</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3377"/>
		<updated>2010-10-13T22:13:05Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Linux Schedulers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example multiple core CPUs, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers; the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round-robin policies, iteration, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but spent a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features that allow tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to maintain fairness. To determine the balance, CFS tracks the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To maintain fairness, the scheduler takes the leftmost node from the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side drift toward the left as the running tasks accumulate virtual runtime, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels get significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which are the lengths of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can launch a program at a given niceness with the shell command nice, and change the niceness of a running program with renice. Nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3373</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3373"/>
		<updated>2010-10-13T22:02:11Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example multiple core CPUs, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers; the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round-robin policies, iteration, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but spent a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features that allow tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to maintain fairness. To determine the balance, CFS tracks the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To maintain fairness, the scheduler takes the leftmost node from the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side drift toward the left as the running tasks accumulate virtual runtime, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels get significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which are the lengths of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can launch a program at a given niceness with the shell command nice, and change the niceness of a running program with renice. Nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3372</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3372"/>
		<updated>2010-10-13T22:01:49Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example multiple core CPUs, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers; the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(In Progress)&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round-robin policies, iteration over the task list, and run queues. A quick read through Linux&#039;s history suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but spent a little extra time (in computer terms) doing so. By Linux 2.6, after experimenting with different designs, the scheduler could provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. With Linux 2.2 the scheduler was reworked: it introduced the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks, and it was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice. If a task did not use up its entire time slice, the remainder was added to its next slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place in the kernel. CFS is built around maintaining fairness in allocating processor time: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree, which is self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree, so tasks can be added and removed efficiently. Tasks with the greatest need of the processor sit toward the left side of the tree, and tasks with lower CPU need toward the right. To maintain fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that time to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. Tasks on the left side are thus given time to execute, while those on the right migrate toward the left over time, maintaining fairness.&lt;br /&gt;
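The pick-and-reinsert cycle described above can be modelled in miniature. This is an illustrative sketch only, not kernel code: the kernel keeps a weighted red-black tree, whereas the sketch below uses Python&#039;s heapq min-heap, which gives the same pick-the-leftmost-node behaviour; the task names and weights are invented for the example.

```python
import heapq

class Task:
    def __init__(self, name, weight=1.0):
        self.name = name          # hypothetical task name, for illustration
        self.weight = weight      # higher weight = higher priority
        self.vruntime = 0.0       # virtual runtime accumulated so far

def run_cfs(tasks, slice_ms, rounds):
    """Repeatedly pick the task with the smallest virtual runtime (the
    'leftmost node'), charge it one time slice scaled by its weight,
    and reinsert it -- the essence of CFS-style fairness."""
    heap = [(t.vruntime, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    schedule = []
    for _ in range(rounds):
        _, i, task = heapq.heappop(heap)         # neediest task runs next
        schedule.append(task.name)
        task.vruntime += slice_ms / task.weight  # heavier tasks age slower
        heapq.heappush(heap, (task.vruntime, i, task))
    return schedule

# A weight-2 task accumulates virtual runtime half as fast, so it is
# picked twice as often as a weight-1 task over the long run.
order = run_cfs([Task("editor", weight=2.0), Task("backup")], slice_ms=10, rounds=6)
```

Over six rounds the weight-2 task is scheduled four times and the weight-1 task twice, mirroring how CFS lets higher-weight tasks age more slowly and therefore stay toward the left of the tree.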
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources: a program with a lower nice level is greedier, while a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with large negative nice levels run significantly faster than those with large positive ones. The Linux scheduler accomplishes this by dividing CPU usage into time slices (also called quanta), the length of time a program may use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the shell command nice; nice values range from -20 to +19.&lt;br /&gt;
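For concreteness, a process&#039;s niceness can also be read and raised from inside a program. A minimal sketch, assuming a Unix-like system where Python&#039;s os.nice() is available (an unprivileged process may raise its own niceness but generally cannot lower it):

```python
import os

# os.nice(increment) adds `increment` to the calling process's niceness
# and returns the new value (Unix only; values range from -20 to +19).
current = os.nice(0)   # an increment of 0 simply reads the current niceness
raised = os.nice(5)    # make this process 5 steps "nicer" (lower priority)
print(current, raised)
```

From the shell, the equivalent is launching a program with a chosen niceness, e.g. nice -n 5 somecommand (somecommand is a placeholder, not a real program).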
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3371</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3371"/>
		<updated>2010-10-13T22:00:50Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems an operating system must handle is process management. To keep a system running efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system&#039;s scheduler. The goal of a scheduler is to ensure that every process gets access to the system resources it requires as efficiently as possible, while maintaining fairness between processes, limiting CPU wait times, and maximizing system throughput. As computer hardware has grown more complex (multi-core CPUs, for example), operating system schedulers have evolved to meet these additional challenges. In this article we compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(In Progress)&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round-robin policies, iteration over the task list, and run queues. A quick read through Linux&#039;s history suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but spent a little extra time (in computer terms) doing so. By Linux 2.6, after experimenting with different designs, the scheduler could provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. With Linux 2.2 the scheduler was reworked: it introduced the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks, and it was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice. If a task did not use up its entire time slice, the remainder was added to its next slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place in the kernel. CFS is built around maintaining fairness in allocating processor time: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree, which is self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree, so tasks can be added and removed efficiently. Tasks with the greatest need of the processor sit toward the left side of the tree, and tasks with lower CPU need toward the right. To maintain fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that time to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. Tasks on the left side are thus given time to execute, while those on the right migrate toward the left over time, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources: a program with a lower nice level is greedier, while a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with large negative nice levels run significantly faster than those with large positive ones. The Linux scheduler accomplishes this by dividing CPU usage into time slices (also called quanta), the length of time a program may use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the shell command nice; nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3368</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3368"/>
		<updated>2010-10-13T21:54:36Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems an operating system must handle is process management. To keep a system running efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system&#039;s scheduler. The goal of a scheduler is to ensure that every process gets access to the system resources it requires as efficiently as possible, while maintaining fairness between processes, limiting CPU wait times, and maximizing system throughput. As computer hardware has grown more complex (multi-core CPUs, for example), operating system schedulers have evolved to meet these additional challenges. In this article we compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(In Progress)&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including (list here). A quick read through Linux&#039;s history suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. With Linux 2.2 the scheduler was reworked: it introduced the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks, and it was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice. If a task did not use up its entire time slice, the remainder was added to its next slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place in the kernel. CFS is built around maintaining fairness in allocating processor time: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree, which is self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree, so tasks can be added and removed efficiently. Tasks with the greatest need of the processor sit toward the left side of the tree, and tasks with lower CPU need toward the right. To maintain fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that time to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. Tasks on the left side are thus given time to execute, while those on the right migrate toward the left over time, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources: a program with a lower nice level is greedier, while a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with large negative nice levels run significantly faster than those with large positive ones. The Linux scheduler accomplishes this by dividing CPU usage into time slices (also called quanta), the length of time a program may use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the shell command nice; nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3366</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3366"/>
		<updated>2010-10-13T21:52:12Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems an operating system must handle is process management. To keep a system running efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system&#039;s scheduler. The goal of a scheduler is to ensure that every process gets access to the system resources it requires as efficiently as possible, while maintaining fairness between processes, limiting CPU wait times, and maximizing system throughput. As computer hardware has grown more complex (multi-core CPUs, for example), operating system schedulers have evolved to meet these additional challenges. In this article we compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(In Progress)&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be fair and equitable. Various methods and concepts have been tried across versions to achieve this fairness, including (list here), while still maintaining a fast run time.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. With Linux 2.2 the scheduler was reworked: it introduced the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks, and it was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice. If a task did not use up its entire time slice, the remainder was added to its next slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place in the kernel. CFS is built around maintaining fairness in allocating processor time: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree, which is self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree, so tasks can be added and removed efficiently. Tasks with the greatest need of the processor sit toward the left side of the tree, and tasks with lower CPU need toward the right. To maintain fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that time to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. Tasks on the left side are thus given time to execute, while those on the right migrate toward the left over time, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources: a program with a lower nice level is greedier, while a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with large negative nice levels run significantly faster than those with large positive ones. The Linux scheduler accomplishes this by dividing CPU usage into time slices (also called quanta), the length of time a program may use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the shell command nice; nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3361</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3361"/>
		<updated>2010-10-13T21:48:29Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems an operating system must handle is process management. To keep a system running efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system&#039;s scheduler. The goal of a scheduler is to ensure that every process gets access to the system resources it requires as efficiently as possible, while maintaining fairness between processes, limiting CPU wait times, and maximizing system throughput. As computer hardware has grown more complex (multi-core CPUs, for example), operating system schedulers have evolved to meet these additional challenges. In this article we compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 21:15, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler used a round-robin policy over a circular run queue, which made adding and removing processes efficient. Linux 2.2 introduced the idea of scheduling classes, allowing the scheduler to distinguish real-time, non-real-time, and non-preemptible tasks; it was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, in which each task could execute up to its time slice; if a task did not use its entire slice, the remaining time was added to its next slice, allowing it to run longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and offered no useful support for real-time systems. It also had no features to exploit newer hardware architectures such as multi-core processors.&lt;br /&gt;
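The O(n) epoch idea above can be sketched in a few lines. This is an illustrative Python model, not kernel code: the task fields are invented for the example, and leftover time carries over in full, a simplification of the real 2.4 counter recalculation.

```python
# Toy sketch (not kernel code) of the Linux 2.4 epoch idea: every
# scheduling event scans the whole run queue (hence O(n)), and a
# leftover time slice is added to the slice for the next epoch.

def pick_next(tasks):
    """Scan all runnable tasks and return the one with the largest
    remaining time slice, or None if the epoch is exhausted."""
    best = None
    for t in tasks:
        if t["counter"] > 0 and (best is None or t["counter"] > best["counter"]):
            best = t
    return best

def new_epoch(tasks, base_slice=10):
    """Start a new epoch: leftover time is added to a fresh base slice,
    so tasks that slept through the last epoch get to run longer."""
    for t in tasks:
        t["counter"] = base_slice + t["counter"]

tasks = [{"name": "a", "counter": 0}, {"name": "b", "counter": 4}]
new_epoch(tasks)          # "b" kept its unused slice and starts ahead
```

The linear scan in `pick_next` is exactly what made the 2.4 design scale poorly as the number of runnable tasks grew.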
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place in the kernel. CFS is built around the idea of fairness in granting processor time: each task should receive a fair share of time on the processor. To measure this, CFS records how much time each task has been given, a quantity called its virtual runtime. A task whose virtual runtime has fallen out of balance is owed more time, which the scheduler must grant to preserve fairness. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes, so tasks can be inserted and removed efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with less need toward the right. To maintain fairness, the scheduler always takes the leftmost node, accounts for the execution time the task receives on the CPU, and adds that time to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the tree. Tasks on the left side are thus given time to execute, while the contents of the right side gradually migrate toward the left, maintaining fairness.&lt;br /&gt;
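The leftmost-node policy can be modelled compactly. The sketch below is illustrative Python, not kernel code: a binary heap stands in for the kernel&#039;s red-black tree (both give O(log n) insertion with cheap access to the minimum), and the `ToyCFS` class, its methods, and the time units are invented for the example.

```python
import heapq

# Toy model of CFS bookkeeping: each task accumulates virtual runtime,
# and the scheduler always runs the task with the smallest one.

class ToyCFS:
    def __init__(self):
        self.queue = []           # heap of (vruntime, name) pairs

    def enqueue(self, name, vruntime=0.0):
        heapq.heappush(self.queue, (vruntime, name))

    def run_one(self, slice_units):
        """Pick the leftmost (smallest-vruntime) task, charge it the
        time it just ran, and reinsert it as still runnable."""
        vruntime, name = heapq.heappop(self.queue)
        heapq.heappush(self.queue, (vruntime + slice_units, name))
        return name

cfs = ToyCFS()
cfs.enqueue("editor")
cfs.enqueue("compiler")
# The two tasks alternate: running one pushes its virtual runtime past
# the other, so it drifts to the right and the other becomes leftmost.
order = [cfs.run_one(1.0) for _ in range(4)]
```

The alternation in `order` is the fairness property in miniature: whichever task has received the least virtual runtime is always next.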
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it will be about sharing system resources. A program with a lower nice level is greedier, while a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), the length of time a program may use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust the niceness of a program with the shell command &amp;lt;tt&amp;gt;nice&amp;lt;/tt&amp;gt;. Nice values range from -20 to +19. &lt;br /&gt;
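The non-linearity of nice levels can be illustrated with the rule of thumb CFS uses: each nice step scales a task&#039;s scheduling weight by roughly a factor of 1.25, and CPU share is proportional to weight. The constants below only approximate the real weight table in the kernel; treat the numbers as illustrative.

```python
# Rule-of-thumb sketch of nice-level weighting under CFS. The factor
# 1.25 per nice step and the base weight 1024 approximate the real
# kernel table; the exact values differ slightly per level.

NICE_FACTOR = 1.25
BASE_WEIGHT = 1024            # approximate weight of a nice-0 task

def weight(nice):
    assert nice in range(-20, 20), "nice values range from -20 to +19"
    return BASE_WEIGHT / (NICE_FACTOR ** nice)

def cpu_share(nices):
    """Relative CPU share for tasks with the given nice levels."""
    weights = [weight(n) for n in nices]
    total = sum(weights)
    return [w / total for w in weights]

# A nice -5 task competing with a nice +5 task gets roughly 90% of the
# CPU, even though the levels differ by only 10 steps out of 40.
shares = cpu_share([-5, 5])
```

Because each step multiplies the weight rather than adding to it, a ten-step gap in nice levels produces nearly a ten-to-one difference in CPU time, which is the &amp;quot;not linear&amp;quot; behaviour described above.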
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also monitors all active programs dynamically. If a program has been waiting an abnormally long time for the processor, it is given a temporary increase in priority to compensate; conversely, a program that has been hogging CPU time is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2415</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2415"/>
		<updated>2010-10-06T23:03:16Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources that might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and the current version of Linux uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-An article about FreeBSD scheduling: http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;In algorithmic terms it is referred to as the O(1) scheduler (CFS, by contrast, is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources:&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2414</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2414"/>
		<updated>2010-10-06T23:01:49Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources that might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and the current version of Linux uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-An article about FreeBSD scheduling: http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;In algorithmic terms it is referred to as the O(1) scheduler (CFS, by contrast, is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Also found some resources:&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2413</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2413"/>
		<updated>2010-10-06T23:01:25Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources that might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and the current version of Linux uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-An article about FreeBSD scheduling: http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;In algorithmic terms it is referred to as the O(1) scheduler (CFS, by contrast, is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Also found some resources:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &lt;br /&gt;
-Wes&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_2_2010&amp;diff=2269</id>
		<title>COMP 3000 Lab 2 2010</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_2_2010&amp;diff=2269"/>
		<updated>2010-09-28T15:01:31Z</updated>

		<summary type="html">&lt;p&gt;Wlawrenc: /* Dynamic Libraries */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following lab requires access to a relatively standard Linux environment.  Use a virtual machine version of Ubuntu or log in to one of the SCS Linux machines.  Alternately, use your own Linux installation (installed natively or in a virtual machine).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Note - this lab is not yet finalized!&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Part A (Mandatory)=&lt;br /&gt;
&lt;br /&gt;
==The Shell==&lt;br /&gt;
&lt;br /&gt;
The shell or command line provides a text interface for running programs.  While not as visually pleasing as a graphical interface, the shell provides a more clear representation of the functionality provided by the operating system.&lt;br /&gt;
&lt;br /&gt;
To run a program contained in the current directory in the shell, you need to prefix the name of the command with a &amp;lt;tt&amp;gt;./&amp;lt;/tt&amp;gt;.  This &amp;quot;./&amp;quot; tells the shell that the location of the command you wish to run is the current directory.  By default, the shell will not search for executable commands in the current working directory.  To run most system commands, the name of the command can be typed without a path specification.&lt;br /&gt;
&lt;br /&gt;
===Help===&lt;br /&gt;
&lt;br /&gt;
#When working in the shell, help is available on most programs in the system, especially those that are command line based.  This system of help is available by using the &amp;lt;tt&amp;gt;man&amp;lt;/tt&amp;gt; command.  If one wanted to get help on the echo command, the associated command would be &amp;lt;tt&amp;gt;man echo&amp;lt;/tt&amp;gt;.  What does &amp;lt;tt&amp;gt;man&amp;lt;/tt&amp;gt; stand for?&lt;br /&gt;
A: &amp;quot;manual&amp;quot;; for example, &amp;lt;tt&amp;gt;man echo&amp;lt;/tt&amp;gt; shows that echo displays a line of text.&lt;br /&gt;
#The &amp;lt;tt&amp;gt;man&amp;lt;/tt&amp;gt; command can also be used to get help on standard C functions.  Briefly (in one line), what does the C function &amp;lt;tt&amp;gt;brk&amp;lt;/tt&amp;gt; do?&lt;br /&gt;
A: Change the location of the program break, which defines the end of the process&#039; data segment.&lt;br /&gt;
# What does the &amp;lt;tt&amp;gt;more&amp;lt;/tt&amp;gt; command do?  &lt;br /&gt;
A: It is a filter for paging through text one screenful at a time.&lt;br /&gt;
# How do you quit the &amp;lt;tt&amp;gt;less&amp;lt;/tt&amp;gt; command?&lt;br /&gt;
A: Press &amp;lt;tt&amp;gt;q&amp;lt;/tt&amp;gt;. (Unlike &amp;lt;tt&amp;gt;more&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;less&amp;lt;/tt&amp;gt; allows backward movement in the file as well as forward movement.)&lt;br /&gt;
&lt;br /&gt;
===Shell Basics===&lt;br /&gt;
&lt;br /&gt;
# The &amp;lt;tt&amp;gt;which&amp;lt;/tt&amp;gt; command can be used to figure out what directory an executable program resides in.  Using this command, what directory contains the &amp;lt;tt&amp;gt;top&amp;lt;/tt&amp;gt; command?&lt;br /&gt;
# &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; is your default shell.  Which of the following is another shell that can be used instead of BASH?&amp;lt;tt&amp;gt; mv, cat, bc, tcsh, or arch &amp;lt;/tt&amp;gt;&lt;br /&gt;
# The &amp;lt;tt&amp;gt;ls&amp;lt;/tt&amp;gt; command can be used to get a listing of the files in a directory.  What options are passed to &amp;lt;tt&amp;gt;ls&amp;lt;/tt&amp;gt; to see all of the files within a directory (including hidden files)?&lt;br /&gt;
# The &amp;lt;tt&amp;gt;ps&amp;lt;/tt&amp;gt; command is used to get a list of processes which are running on the system.  Start a new shell from bash by running &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt;.  Does the new shell replace or run in parallel with the old BASH shell?  What about &amp;lt;tt&amp;gt;exec bash&amp;lt;/tt&amp;gt;?  You can use the &amp;lt;tt&amp;gt;ps&amp;lt;/tt&amp;gt; command to determine if bash is still running.&lt;br /&gt;
# Because each process is started by some other process, there is a process tree structure on the system.  From within your bash terminal, determine what process started BASH by using the &amp;lt;tt&amp;gt;pstree&amp;lt;/tt&amp;gt; command.  What graphical application can give you the same information?&lt;br /&gt;
&lt;br /&gt;
==Processes==&lt;br /&gt;
&lt;br /&gt;
Each application running on a system is assigned a unique process identifier.  The &amp;lt;tt&amp;gt;ps&amp;lt;/tt&amp;gt; command shows the process identifiers for running processes.  Each process running on the system is kept separated from other processes by the operating system.  This information will be useful for subsequent questions.&lt;br /&gt;
&lt;br /&gt;
==Permissions==&lt;br /&gt;
&lt;br /&gt;
Your permission to access a file in Unix is determined by who you are logged in as. All files on the Unix file system (including directories and other special files) have three sets of permissions: the first set denotes the allowed file operations for the owner of the file, the second set those for a group of users, and the third set those for everyone else. A file is always owned by someone and is always associated with a group.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;ls&amp;lt;/tt&amp;gt; command with the &amp;lt;tt&amp;gt;-l&amp;lt;/tt&amp;gt; option can be used to show both the permissions of a file as well as the owner and group associated with the file.  Permissions are listed first, followed by the owner and the group.&lt;br /&gt;
&lt;br /&gt;
#Who is the owner of the &amp;lt;tt&amp;gt;/etc&amp;lt;/tt&amp;gt; directory?&lt;br /&gt;
A: root&lt;br /&gt;
# What group is associated with the &amp;lt;tt&amp;gt;/etc/shadow&amp;lt;/tt&amp;gt; file?&lt;br /&gt;
A: shadow&lt;br /&gt;
# Each user is a member of some number of groups.  You can determine what groups you are part of by using the &amp;lt;tt&amp;gt;groups&amp;lt;/tt&amp;gt; command. Based on the groups listed, would you (the ``student&#039;&#039; user in the lab) be a member of the group associated with the &amp;lt;tt&amp;gt;/etc/shadow&amp;lt;/tt&amp;gt; file?&lt;br /&gt;
A: no&lt;br /&gt;
&lt;br /&gt;
==Environment==&lt;br /&gt;
&lt;br /&gt;
The environment on both Linux and Windows contains variable-value pairs which are useful to applications running on the system. In Linux, these environment variables can be printed on the command line by referring to the variable name prefixed with a $ sign (e.g., to output the value of the HELLO environment variable, one could write &amp;lt;tt&amp;gt;echo $HELLO&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
# On the command line, run the following two sets of commands.  Notice that the bash command will start a new shell separate from the shell that the HELLO environment variable was set in.&lt;br /&gt;
&lt;br /&gt;
 HELLO=&amp;quot;Hi There&amp;quot;&lt;br /&gt;
 bash&lt;br /&gt;
 echo $HELLO&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
 export HELLO=&amp;quot;Hi There&amp;quot;&lt;br /&gt;
 bash&lt;br /&gt;
 echo $HELLO&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
What does the export command seem to do?&lt;br /&gt;
&lt;br /&gt;
==Dynamic Libraries==&lt;br /&gt;
&lt;br /&gt;
Most applications on the system do not contain all the code that they need right within the executable.  Instead, dynamic libraries are loaded into the program address space when the program loads.  As an example, the standard C library, which contains such functions as &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; is loaded in at run-time.&lt;br /&gt;
&lt;br /&gt;
# Using &amp;lt;tt&amp;gt;ldd&amp;lt;/tt&amp;gt;, what dynamic library dependencies does the &amp;lt;tt&amp;gt;top&amp;lt;/tt&amp;gt; command have?&lt;br /&gt;
# In addition to the libraries listed as dependencies for the application &amp;lt;tt&amp;gt;top&amp;lt;/tt&amp;gt; by &amp;lt;tt&amp;gt;ldd&amp;lt;/tt&amp;gt;, there may be other libraries that the application loads dynamically at run-time.  Retrieve the process number &amp;lt;em&amp;gt;PID&amp;lt;/em&amp;gt; for the &amp;lt;tt&amp;gt;bash&amp;lt;/tt&amp;gt; process (using &amp;lt;tt&amp;gt;ps&amp;lt;/tt&amp;gt;) and examine the map file located at &amp;lt;tt&amp;gt;/proc/&amp;lt;/tt&amp;gt;&amp;lt;em&amp;gt;PID&amp;lt;/em&amp;gt;&amp;lt;tt&amp;gt;/maps&amp;lt;/tt&amp;gt;.  What other dynamic libraries have been loaded into the application while it has been running?&lt;br /&gt;
&lt;br /&gt;
=Part B (Optional)=&lt;br /&gt;
&lt;br /&gt;
==Processes==&lt;br /&gt;
&lt;br /&gt;
# What is the difference between the &amp;lt;tt&amp;gt;fork&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;exec&amp;lt;/tt&amp;gt; function in a UNIX environment?&lt;br /&gt;
# What is a zombie process?&lt;br /&gt;
# Give an example C program which creates a zombie process.  Note that &amp;lt;tt&amp;gt;BASH&amp;lt;/tt&amp;gt; by default will collect and destroy zombie processes and so you will need to avoid bash destroying the zombie process during debugging.  This can be done by delaying the parent exit (using sleep is one good way to do this).&lt;br /&gt;
# Perform the modifications to your program above to avoid creating a zombie process.  List the new program.&lt;br /&gt;
&lt;br /&gt;
==Permissions==&lt;br /&gt;
&lt;br /&gt;
# Permissions on Unix are grouped into three basic file operations. What are these file operations?&lt;br /&gt;
# What does it mean to have execute permission on a directory?&lt;br /&gt;
# What are the 6 basic file permissions within Windows?&lt;br /&gt;
# What is the difference between the write and modify file permission in Windows?&lt;br /&gt;
# Because all files are stored in a directory, under UNIX permission to delete, rename, and move files is determined by the users access rights on the directory the file is contained in.  What attribute on the directory prevents those who can modify a directory from deleting files (hint: help on the &amp;lt;tt&amp;gt;chmod&amp;lt;/tt&amp;gt; command may prove useful).&lt;br /&gt;
# In Windows, a file can be associated with more than one group and each group can have different access permissions.  On Unix, new groups can be created by the system administrator which are supersets of other groups.  Given this, is it possible to develop an access permission scenario which would be impossible to implement in Unix but possible to implement in Windows?  If yes, give an example.  If no, explain why.&lt;br /&gt;
&lt;br /&gt;
==Environment==&lt;br /&gt;
&lt;br /&gt;
# What does the PATH environment variable do?&lt;br /&gt;
# What environment variable tells X applications where to find the X server which it should communicate with to display the output?&lt;br /&gt;
&lt;br /&gt;
==Dynamic Libraries==&lt;br /&gt;
&lt;br /&gt;
Dynamic libraries allow one copy of executable code to be used by many different processes on the system, without requiring that multiple copies of the code be stored on disk in different files. What are some problems that can arise when different programs use the same common DLLs (hint: ``DLL Hell&#039;&#039;)?&lt;/div&gt;</summary>
		<author><name>Wlawrenc</name></author>
	</entry>
</feed>