<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sschnei1</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sschnei1"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Sschnei1"/>
	<updated>2026-04-11T04:25:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=6037</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=6037"/>
		<updated>2010-12-02T00:39:21Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody has read the email in their Connect account. Does anyone specifically want to send him the email with the group members listed? If not, I will just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution to a log file so that the execution can later be replayed, allowing an observer to follow exactly what was happening on the machine. Remus [[#References | [1]]] has contributed a highly efficient snapshotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action performed on the virtual machine is recorded and can later be used to verify the correctness of the application or to hold the machine or user to account. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; There are programs like GridCop[[#References | [2]]] that can be used to monitor the progress and execution of a remotely executing program by requesting a beacon packet. When the remote computer is sending the packets, the receiving/logging computer must be a trusted computer (hardware, software, OS) so that the receipt of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interruptions during logging must be handled, or the inconsistencies will produce an inaccurate outcome. The AVM does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games or any specific modification in a program can be either scanned[[#References | [3][4]]] for or prevented[[#References | [5][6]]] by certain programs. The issue with these scanning and preventative software is the knowledge/awareness of specific cheats or situations that the software can handle. An AVM is designed to counter any kind of general cheat.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; This refers to a situation where the operations of an execution diverge from those of the host/reference (trusted) execution; when the two do not match, a violation has occurred.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative for the first part: &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tries to tackle a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that the interaction depends only on the intended software and not on the particular node. Say node A interacts with node B via execution exe1, and node A also interacts with node C via exe1, but node C has been modified and responds with exe2. We can then assume that the responses of B and C will differ. Being able to prove beyond doubt that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These services require a certain amount of trust between one user and another, as well as between the user and the host. When a node (user or computer) expects some result or feedback from another node, it would hope that an interaction with node A is the same as it would be with another node, node B. Say node A interacts with node B via execution exe1; when nodes A and B interact with node C, they would both expect to interact with execution exe1, but if node C behaves differently and executes exe2, it would be beneficial to be notified of this difference. Some examples make this concrete. Node A is playing a game with node B, and the game executed on node B is the same as on A; when node A plays with node C, however, node C executes the same operations as node A plus a cheating program. Or: when node A buys some products from node B&#039;s server, the server processes the order and then deletes node A&#039;s sensitive information (execution 1); when node A buys from node C&#039;s server, the order is processed, but the sensitive information node A provided is also rerouted to another server so that it can be used without permission. These are only a few cases where the operations in an execution need to be logged and verified. The problem being addressed here is to create a procedure by which a node can be held accountable, logging the operations in an execution to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games there are cheats that users run to gain benefits that were not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether arbitrary cheating is taking place; rather, they check whether a previously catalogued cheating operation is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can be run in the background of a game such as CounterStrike, and the PunkBuster system installed on the user&#039;s machine already has aimbot.exe catalogued as a cheating program by the developers, then PunkBuster might notify the current game servers or even prevent the user from playing until the aimbot.exe process is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine without a doubt that a node is faulty and to prove it with solid evidence. It can also be used to defend a node against false accusations. Numerous systems already use accountability, but they are mostly tied to specific applications, where a point of reference must be used for comparison. For example PeerReview[[#References |[7]]], a system closely related to this work, must be implemented inside the application, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets and can check whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Monitoring network activity is a common solution, examining the node&#039;s inbound and outbound traffic. This reveals how the software is operating, or in the case of an AVM how the whole virtual machine is behaving. GridCop[[#References |[8]]] is an example that inspects a small number of packets periodically. Another way of detecting faults remotely is to use a trusted node, which can tell immediately when a fault occurs or a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- and anything else you would like to add or modify, or leave a note in the discussion section if you want me to take another look or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the processes of an execution on a specific node (computer) depends heavily on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations of an execution that occurred on a node. Replaying the operations can show what the node was doing, which might seem sufficient for finding out whether a node caused integrity violations. The concept of snapshotting/recording the operations is not the issue with deterministic replay; the issue is that the data written to the replay log may be tampered with by the node itself so that the replay looks correct. By faking the results of the operations, the node can make the auditing computer falsely believe that all operations ran normally. The logging done by these recording programs is directly related to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most useful contribution of the accountable virtual machine (AVM) proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM). It is what allows for the fault checking of virtual machines in a cloud computing environment. The AVMM can be broken down into three parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMWare Workstation 6.5.1[[#References |[9]]], the tamper-evident log was adapted from code in PeerReview[[#References |[7]]], and the audit tools were built from scratch. &lt;br /&gt;
&lt;br /&gt;
The accountable virtual machine monitor relies on four assumptions:&lt;br /&gt;
&lt;br /&gt;
1. All transmitted messages are eventually received, being retransmitted if needed.&lt;br /&gt;
&lt;br /&gt;
2. Machines and Users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.&lt;br /&gt;
&lt;br /&gt;
3. All parties have a certified keypair, that can be used to sign messages.&lt;br /&gt;
&lt;br /&gt;
4. To audit a log, the user has a reference copy of the VM used.&lt;br /&gt;
&lt;br /&gt;
The job of the AVMM is to record all incoming and outgoing messages to a tamper-evident log, along with enough information about the execution to enable deterministic replay. &lt;br /&gt;
&lt;br /&gt;
The AVMM must record nondeterministic inputs (such as hardware interrupts) because such input is asynchronous: the exact timing of each input must be recorded so that it can be injected at the same moment during replay. Wall-clock time is not accurate enough for this, so the AVMM uses a combination of the instruction pointer, a branch counter, and additional registers. Not all inputs have to be recorded this way; software interrupts, for example, are requests issued by the AVM itself and will simply be issued again during replay.&lt;br /&gt;
&lt;br /&gt;
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. &lt;br /&gt;
It is important for the AVMM to detect inconsistencies between the user&#039;s log and the machine&#039;s log (in case of foul play); the AVMM simply cross-references messages and inputs during replay, easily detecting any discrepancies.&lt;br /&gt;
&lt;br /&gt;
The AVMM periodically takes snapshots of the AVM&#039;s current state. This facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered slightly by making the snapshots incremental (only the state that has changed since the last snapshot is saved). The user can authenticate a snapshot using a hash tree of the state, which is generated by the AVMM and updated after each snapshot.&lt;br /&gt;
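The snapshot hash tree can be sketched roughly as follows (a minimal Python illustration, assuming SHA-256 and treating the state as a list of pages; the names and structure here are ours, not the paper&#039;s):&lt;br /&gt;

```python
# Hypothetical sketch: a Merkle-style hash tree over snapshot pages.
# The auditor only needs the root to authenticate the whole snapshot.
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merkle_root(pages):
    # Hash each page, then repeatedly hash adjacent pairs
    # until a single root remains.
    level = [H(p) for p in pages]
    while len(level) != 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single page changes the root, so a machine cannot substitute snapshot state without the auditor noticing.&lt;br /&gt;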
&lt;br /&gt;
&#039;&#039;&#039;Tamper-Evident Log&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The log is a hash chain of entries.&lt;br /&gt;
Each log entry has the form e = (s, t, c, h), where&lt;br /&gt;
s = monotonically increasing sequence number&lt;br /&gt;
t = entry type&lt;br /&gt;
c = type-specific data&lt;br /&gt;
h = hash value&lt;br /&gt;
&lt;br /&gt;
The hash value is calculated as h_i = H(h_(i-1) || s || t || H(c)), where&lt;br /&gt;
H() is a hash function and&lt;br /&gt;
|| stands for concatenation.&lt;br /&gt;
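The hash chain above can be sketched in a few lines of Python (a hypothetical illustration assuming SHA-256 as H and an 8-byte big-endian encoding of the sequence number; these encoding choices are ours, not the paper&#039;s):&lt;br /&gt;

```python
# Minimal sketch of the tamper-evident log's hash chain:
# h_i = H(h_(i-1) || s || t || H(c))
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def append_entry(log, prev_hash, s, t, c):
    # s: sequence number, t: entry type, c: type-specific data (bytes)
    h = H(prev_hash + s.to_bytes(8, "big") + t.encode() + H(c))
    log.append((s, t, c, h))
    return h

log = []
h0 = b"\x00" * 32                      # hash before the first entry
h1 = append_entry(log, h0, 1, "send", b"message one")
h2 = append_entry(log, h1, 2, "recv", b"message two")
```

Because each entry&#039;s hash covers its predecessor, altering any earlier entry changes every hash after it.&lt;br /&gt;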
&lt;br /&gt;
Each message sent is signed with the sender&#039;s private key; the AVMM logs the message with the signature attached but removes the signature before delivering the message to the AVM. To ensure nonrepudiation, an authenticator (a signed statement about a log entry) is attached to each outgoing message.&lt;br /&gt;
&lt;br /&gt;
To detect when a message is dropped, each party sends an acknowledgement for each message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages altogether, the machine is presumed to have failed.&lt;br /&gt;
&lt;br /&gt;
To perform a log check, the user retrieves a pair of authenticators and then challenges the machine to produce the log segment between the two. The log is computationally infeasible to edit without breaking the hash chain; thus, if the log has been tampered with, the hash chain will not match and the user will be notified of the tampering.&lt;br /&gt;
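The log check can be sketched as follows (again a hypothetical Python illustration using SHA-256 and an 8-byte sequence-number encoding of our own choosing): the auditor recomputes the hash chain over the returned segment and compares the result against the hash carried by the second authenticator.&lt;br /&gt;

```python
# Hypothetical sketch of an auditor's log check between two authenticators.
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def entry_hash(prev_hash, s, t, c):
    # Recompute one link of the chain: H(h_(i-1) || s || t || H(c))
    return H(prev_hash + s.to_bytes(8, "big") + t.encode() + H(c))

def verify_segment(start_hash, entries, end_hash):
    # entries: list of (s, t, c) tuples claimed by the machine
    h = start_hash
    for s, t, c in entries:
        h = entry_hash(h, s, t, c)
    return h == end_hash              # any edit breaks the chain
```

Editing, dropping, or reordering any entry inside the segment changes the final hash, so the check fails.&lt;br /&gt;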
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Auditing Mechanism&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
From the VMM&#039;s perspective, the AVM&#039;s execution is deterministic once the recorded inputs are supplied.&lt;br /&gt;
&lt;br /&gt;
To perform an audit, the user:&lt;br /&gt;
&lt;br /&gt;
1. obtains a segment of the machine&#039;s log and the authenticators&lt;br /&gt;
&lt;br /&gt;
2. downloads a snapshot of the AVM at the beginning of the segment&lt;br /&gt;
&lt;br /&gt;
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.&lt;br /&gt;
&lt;br /&gt;
The user can verify the execution of software through three different checks: verifying the log, the snapshot, and the execution.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a log segment, the user retrieves from the machine the authenticators whose sequence numbers fall within the range of the log segment. The user then downloads the log segment from the machine, spanning from the most recent snapshot before the start of the segment to the end of the segment, and checks the authenticators for tampering. If these checks pass, the user can assume the log segment is genuine. If the machine is faulty, the segment will be unavailable for download, or the machine may return a corrupted log segment; either outcome can be used to convince a third party of the fault.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a snapshot, the user downloads from the machine the snapshot of the AVM&#039;s state at the beginning of the log segment, and the hash tree is recomputed. The new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use this to convince a third party of the machine&#039;s faults.&lt;br /&gt;
&lt;br /&gt;
In order for the user to verify the execution of a log segment, the user needs three inputs: the log segment, the snapshot, and the public keys of the machine and of any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (determining whether the log is well-formed) and a semantic check (determining whether the information in the log corresponds to a correct execution of the machine).&lt;br /&gt;
&lt;br /&gt;
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and exiting the AVM.&lt;br /&gt;
&lt;br /&gt;
The semantic check creates a local VM that executes the machine&#039;s log segment; the VM is initialized with a snapshot from the machine if possible. The local VM then runs the log segment and the resulting data is recorded. The auditing tool checks the log entries, inputs, outputs, and snapshot hashes of the replayed execution against the original log. If any discrepancies are detected, the fault is reported and can be used as evidence against the machine.&lt;br /&gt;
&lt;br /&gt;
Why is it better?&lt;br /&gt;
[To Do]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I read through it and fixed a few missing letters here and there, so if someone else could read it as well and then sign under me, we can probably move it to the essay. Thanks. --[[User:Mchou2|Mchou2]] 23:53, 25 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I just read it and fixed some small parts. Looks good. --[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The layout of the paper is essential to the reader&#039;s comprehension. The introduction clearly describes what the reader should expect in the following pages, especially which problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of advantages and disadvantages of an AVM. A good example is &amp;quot;Cheat Detection&amp;quot;. Cheaters use programs that work around the original game code to gain a major advantage over other players. Since an AVM detects cheats generically, it has wider coverage than most other cheat-detection mechanisms. The logs also make it possible to replay the game; thus, players using an AVM can see how other players play by replaying the game from a player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player might suffer from the AVM&#039;s overhead. Everything is logged and stored on the hard drive, which takes a large amount of space; in the example in the paper it is 148 MB per hour after compression. This reduces the frame rate (fps). Additionally, the connection through the AVM increases the ping time to the server. &lt;br /&gt;
&lt;br /&gt;
As a proof of concept, they used their AVM with the online game Counter-Strike and tried to detect online cheats. They were using “Dell Precision T1500 workstations, with 8 GB of memory and 2.8 GHz Intel Core i7 860 CPUs”[pg 10]. These machines are considerably more powerful than the system requirements of Counter-Strike, which are “500 MHz processor, 96 MB RAM”[10]. A 10-year-old game [10] should use few resources on a Dell Precision T1500 workstation. In comparison, newer games consume far more resources than Counter-Strike, leaving less room to run the AVM. A 13% slowdown [pg 12] in a game where you are only getting 30 to 40 fps is quite noticeable. This is very detrimental to the gameplay, because 60 fps or more is considered optimal performance.&lt;br /&gt;
&lt;br /&gt;
In the paper the authors state that the AVM will only generate an extra 5 ms of latency. While this does not seem like a lot, the measurement was taken over a LAN with all the computers connected to the same switch [pg. 12]. This sample does not accurately represent real-life situations and therefore lacks external validity: many of these online games are played over the internet, with participants sometimes not even on the same continent, so the latency overhead of the AVM would certainly increase with the added distance. [12]&lt;br /&gt;
&lt;br /&gt;
Additional Critiques:&lt;br /&gt;
&lt;br /&gt;
While the paper does test a slightly larger than one-to-one scenario, it certainly does not test a real-world environment where 16, 32, or even 64 players would be playing at the same time.  &lt;br /&gt;
&lt;br /&gt;
Spot checking can be used for applications that require snapshots every x seconds. Even though this approach removes a lot of overhead and data storage, it only verifies that the application or user is working as intended every x seconds. Thus, someone could find the pattern of those snapshots and render the AVM useless.  &lt;br /&gt;
&lt;br /&gt;
AVMs are extremely effective against two types of cheating: cheats that produce incorrect network messages, and cheats that have to be loaded with the game. This is perfect for tournament-style competition, but in the real world it would not be of much use. Games get patched, users download add-ons for the game, and so on. Every patch or add-on would require a new AVM, which is unreasonable given the number of people playing the game. A solution proposed by the team was to disable the right to install anything on the AVM. While this could work in a tournament environment, normal users at home would not be pleased with this limitation. &lt;br /&gt;
&lt;br /&gt;
An AVM will not in any way catch a bug or exploit in a program that a malicious user could abuse, as the exploit would appear on both the user&#039;s and the monitor&#039;s systems and behave the same.&lt;br /&gt;
&lt;br /&gt;
// more Critiques&lt;br /&gt;
&lt;br /&gt;
For their use case, the authors did not mention that in Counter-Strike the user can record a demo of his current game. Some online leagues require every player to record his own demo and upload it to the website, where every person in the league can watch it; without this demo the team forfeits the match immediately. &lt;br /&gt;
Additionally, some leagues require the player to start an extra program (e.g. Electronic Sports League WIRE), which checks the programs running in the background. It also takes random snapshots of the current player, compresses all the information into a file, and uploads it to one of the league&#039;s servers, where it can be checked by any player.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP),Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/&lt;br /&gt;
&lt;br /&gt;
[10] Counter-Strike http://store.steampowered.com/app/10/&lt;br /&gt;
&lt;br /&gt;
[12] Larry L. Peterson and Bruce S. Davie. Computer Networks: A Systems Approach, 2007&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings to this article, we can easily choose what parts each member would like to work on, obviously since there are more members than parts, multiple members will have to work on the same parts or can work on all parts, I guess it&#039;s really up to you. I know that most people have a lot of projects coming up so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good, and as I was going to post what I had for the research problem, I saw you had posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I didn&#039;t write anything yet to Critique. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-if anyone has information that they are working on they can just post it up and at least others can look at it and maybe build up stuff on it, and I&#039;m sure everyone is aware of the extension that we got also, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- Just added my contribution section, can someone proof read and sign it before I move it over to the essay. I didn&#039;t do the &amp;quot;why is it better&amp;quot; part because I found the implementation took a lot of writing. For anyone that wants to do the other part, I&#039;d suggest comparing AVMs to PunkBuster and/or VAC, and a cloud computing service (focusing on the auditing). Cheers --[[User:Hirving|Hirving]] 19:44, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I started the what-is-better/worse part in the Critique section. I will add the comparison of AVMs to PunkBuster and/or VAC soon. I personally feel there is not that much to write for the Critique section. -- [[User:Sschnei1|Sschnei1]] 20:39, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Hey. I have a bit to add to your Critique section. It&#039;s mostly expanding on your last paragraph, plus a bit on how the tests were performed. I&#039;ll post my stuff later tonight; I just need to find some sources for my argument.--[[User:Pcox|Pcox]] 01:06, 25 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I read through the Critique and will post some modifications. The last point of the critique says the authors didn&#039;t mention recording the game, but on page 2 they did: &lt;br /&gt;
&amp;quot;However, replay by itself is not sufficient to detect faults on a remote machine, since the machine could record incorrect information in such a way that the replay looks correct, or provide inconsistent information to different auditors&amp;quot;&lt;br /&gt;
So should we remove the last point?&lt;br /&gt;
Also, the second and third paragraphs in the Critique do not really critique anything; they mostly restate the contribution of the paper. Should we keep them?&lt;br /&gt;
&lt;br /&gt;
- Sure, post some modifications. What I meant in the first part is that the game has an internal recording mechanism that records a 1:1 video of your in-game screen, which can be replayed from within the game itself. I thought it was useful to put in, but if it&#039;s unnecessary for the paper, we can take it out.&lt;br /&gt;
&lt;br /&gt;
- I made some modifications in the Critique and moved it to the front page. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Great. I apologize that I did not manage to get more written. Other courses kept me occupied. - [[User:Sschnei1|Sschnei1]]&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5725</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5725"/>
		<updated>2010-11-30T13:22:55Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at your Connect account. Does anyone specifically want to send him the email with the group members listed? If not, I&#039;ll just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;: A virtual machine that keeps a tamper-evident log of its execution, so that its behaviour can be audited and faults or unauthorized modifications detected.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution into a log file so that the execution can later be replayed, in order to follow exactly what was happening on the machine. Remus [[#References | [1]]] contributed a highly efficient snapshotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability, in the context of this paper, means that every action taken on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and answers for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; Programs such as GridCop[[#References | [2]]] can monitor the progress and execution of a remotely executing program by requesting beacon packets. When the remote computer sends the packets, the receiving/logging computer must be trusted (hardware, software, and OS) so that the received packets remain consistent. To detect a fault in a remote system, every packet must arrive safely, and any interruptions during logging must be handled, or the inconsistencies will produce an inaccurate outcome. An AVM, by contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and prevention software is that it must already know about the specific cheats or situations it handles. An AVM is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; An integrity violation occurs when the execution on a machine is not consistent with the expected host/reference (trusted) execution.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative  for the first part : &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tries to tackle a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it hopes that the interaction depends only on the intended software, independent of the particular node. Say node A interacts with node B using execution exe1, and node A also interacts with node C using exe1, but node C has been modified and responds with exe2. We can then assume that the responses of B and C will differ. Being able to prove, beyond doubt, that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These services need a certain amount of trust between the interactions of one user and another, as well as between a user and a host. When a node (user or computer) expects some result or feedback from another node, it hopes that an interaction with node A would proceed the same way as the same interaction with any other node B. Say node A interacts with node B using execution exe1. When A and B then interact with node C, they both expect execution exe1; if node C instead behaves differently and executes exe2, it would be beneficial to be notified of this difference. Some concrete examples: node A is playing a game with node B, and the game executed on B is the same as on A; when A plays with node C, however, C executes the same operations as A plus a cheating program. Or: node A buys products from node B&#039;s server, which processes the order and then deletes A&#039;s sensitive information (execution 1); when A buys from node C&#039;s server, the order is processed but A&#039;s sensitive information is also rerouted to another server, where it can be used without permission. These are only a few cases where the operations in an execution need to be logged and verified. The problem being addressed is to create a procedure by which a node can be held accountable, logging the operations in an execution to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games there are cheats that users run to gain benefits not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a previously catalogued cheating program is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can run in the background of a game such as Counter-Strike, and the PunkBuster software on the user&#039;s system already has aimbot.exe catalogued as a cheat by the developers, PunkBuster might notify the current game servers or even prevent the user from playing until aimbot.exe is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, with solid evidence, that a node is faulty; it can also defend a node against false accusations. Numerous systems already provide accountability, but they are mostly tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview[[#References |[7]]], a system closely related to this work, must be implemented inside the application itself, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets to see whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, and whether the machine itself is working as intended? Monitoring network activity is a common solution; by looking at a node&#039;s inbound and outbound traffic, one can tell how the software is operating, or, in the case of an AVM, how the whole virtual machine is behaving. GridCop[[#References |[8]]], for example, inspects a small number of packets periodically. Another way of detecting faults remotely is to use a trusted node, which can tell immediately if a fault occurs or an unauthorized modification is made. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-Add anything else you would like to add or modify, or leave a note in the Discussion section if you want me to take another look or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the execution of a specific node (computer) depends heavily on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations of an execution on a node. Replaying those operations shows what the node was doing, which would seem sufficient for finding integrity violations. Snapshotting and recording the operations is not the issue with deterministic replay; the issue is that the data written to the replay log may be tampered with by the node itself so that the replay looks correct. By faking the recorded results, the node makes the auditing computer falsely believe that everything ran normally. The logging done by these recording programs is thus directly relevant to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most useful contribution of the accountable virtual machine (AVM) design proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM), which enables fault checking of virtual machines in a cloud computing environment. The AVMM can be broken down into several parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMWare Workstation 6.5.1[[#References |[9]]], the tamper-evident log was adapted from code in PeerReview[[#References |[7]]], and the audit tools were built from scratch. &lt;br /&gt;
&lt;br /&gt;
The accountable virtual machine monitor relies on four assumptions:&lt;br /&gt;
&lt;br /&gt;
1. All transmitted messages are received, retransmitted if needed.&lt;br /&gt;
&lt;br /&gt;
2. Machines and users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.&lt;br /&gt;
&lt;br /&gt;
3. All parties have a certified keypair that can be used to sign messages.&lt;br /&gt;
&lt;br /&gt;
4. To audit a log, the user has a reference copy of the VM used.&lt;br /&gt;
&lt;br /&gt;
The job of the AVMM is to record, in a tamper-evident log, all incoming and outgoing messages along with enough information about the execution to enable deterministic replay. &lt;br /&gt;
&lt;br /&gt;
The AVMM must record nondeterministic inputs (such as hardware interrupts) because such input is asynchronous: the exact timing of each input must be recorded so that it can be injected at the same moment during the replay. Wall-clock time is not accurate enough for this, so the AVMM uses a combination of the instruction pointer, a branch counter, and additional registers. Not all inputs have to be recorded this way; software interrupts, for example, are requests sent by the AVM itself, and will simply be issued again during replay.&lt;br /&gt;
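The record-and-reinject idea above can be sketched as a toy simulation. This is purely illustrative: the tiny "VM", its step counter, and all names are invented for this sketch, and a real AVMM tracks position in the instruction stream with hardware counters rather than a loop index.

```python
# Toy sketch of deterministic replay: log each nondeterministic input
# together with the exact step at which it occurred, then re-inject it
# at the same step during replay.
def run(program, inputs=None, log=None):
    """Run a tiny 'VM': each step adds the next program value to state.
    Record mode (inputs given): asynchronous inputs are applied and
    logged as (step, value). Replay mode (inputs=None): logged inputs
    are re-injected at exactly the recorded step."""
    state = 0
    replay = inputs is None
    for step in range(len(program)):
        state += program[step]
        if replay:
            for (s, val) in log:
                if s == step:          # re-inject at the same moment
                    state += val
        elif step in inputs:           # asynchronous input arrives
            val = inputs[step]
            log.append((step, val))    # record timing and value
            state += val
    return state

log = []
original = run([1, 2, 3, 4], inputs={2: 10}, log=log)  # record
replayed = run([1, 2, 3, 4], log=log)                  # replay
assert original == replayed == 20
```

If the input were re-injected at any other step, intermediate states would diverge from the recorded execution, which is why the AVMM must pin down timing precisely.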
&lt;br /&gt;
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. &lt;br /&gt;
It is important for the AVMM to detect inconsistencies between the user&#039;s log and the machine&#039;s log (in case of foul play), so the AVMM cross-references messages and inputs during replay, easily detecting any discrepancies.&lt;br /&gt;
&lt;br /&gt;
The AVMM periodically takes snapshots of the AVM&#039;s current state. This facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered slightly by making the snapshots incremental (only state that has changed since the last snapshot is saved). The user can authenticate a snapshot using a hash tree of the state, generated by the AVMM, which updates the hash tree after each snapshot.  &lt;br /&gt;
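The hash-tree idea can be sketched as follows. This is a minimal Merkle-tree sketch under assumed details (fixed-size state chunks, SHA-256, duplicating the last node on odd levels); the paper's actual tree construction may differ.

```python
# Sketch: authenticate a snapshot by hashing state chunks into a tree
# whose single root hash commits to the entire snapshot.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Hash each chunk into a leaf, then pair hashes upward to a root."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

snapshot = [b"page0", b"page1", b"page2", b"page3"]
root = merkle_root(snapshot)

# Tampering with any single chunk changes the root:
tampered = [b"page0", b"pageX", b"page2", b"page3"]
assert merkle_root(tampered) != root
```

A tree (rather than one flat hash) also suits incremental snapshots: when only one chunk changes, only the hashes on its root path need recomputing.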
&lt;br /&gt;
&#039;&#039;&#039;Tamper-Evident Log&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The log is a hash chain of entries.&lt;br /&gt;
Each log entry has the form e_i = (s_i, t_i, c_i, h_i), where&lt;br /&gt;
s_i = monotonically increasing sequence number&lt;br /&gt;
t_i = entry type&lt;br /&gt;
c_i = data associated with that type&lt;br /&gt;
h_i = hash value&lt;br /&gt;
&lt;br /&gt;
The hash value is calculated as h_i = H(h_(i-1) || s_i || t_i || H(c_i)), where&lt;br /&gt;
H() is a hash function and&lt;br /&gt;
|| stands for concatenation.&lt;br /&gt;
&lt;br /&gt;
Each message sent is signed with a private key; the AVMM logs the message with the signature attached but removes the signature before passing the message to the AVM. To ensure nonrepudiation, an authenticator is attached to each outgoing message.&lt;br /&gt;
&lt;br /&gt;
To detect when a message is dropped, each party sends an acknowledgement for each message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages altogether, the machine is presumed to have failed.&lt;br /&gt;
&lt;br /&gt;
To perform a log check, the user retrieves a pair of authenticators, then challenges the machine to produce the log segment between the two. The log is computationally infeasible to edit without breaking the hash chain; if the log has been tampered with, the hash chain will differ and the user will be notified of the tampering.&lt;br /&gt;
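The hash chain described above can be sketched in a few lines. This is a simplified illustration of the entry formula h_i = H(h_(i-1) || s_i || t_i || H(c_i)), not the paper's code: the fixed-width encoding, the all-zero genesis hash, and the function names are assumptions of this sketch, and signatures/authenticators are omitted.

```python
# Sketch of a tamper-evident log: each entry extends a hash chain,
# so editing any earlier entry breaks every later hash.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def append(log, t: bytes, c: bytes):
    """Append entry (s, t, c, h) with h = H(h_prev || s || t || H(c))."""
    prev_h = log[-1][3] if log else b"\x00" * 32
    s = len(log)                       # monotonically increasing seq no
    h = H(prev_h + s.to_bytes(8, "big") + t + H(c))
    log.append((s, t, c, h))

def verify(log) -> bool:
    """Recompute the whole chain; any tampering yields a mismatch."""
    prev_h = b"\x00" * 32
    for (s, t, c, h) in log:
        if h != H(prev_h + s.to_bytes(8, "big") + t + H(c)):
            return False
        prev_h = h
    return True

log = []
append(log, b"SEND", b"hello")
append(log, b"RECV", b"ack")
assert verify(log)

log[0] = (0, b"SEND", b"cheat!", log[0][3])   # rewrite an old entry
assert not verify(log)                         # chain no longer checks out
```

Because each hash folds in the previous one, an auditor holding two signed authenticators can demand the segment between them and recompute the chain; the machine cannot rewrite history without producing a mismatch.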
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Auditing Mechanism&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
From the VMM&#039;s perspective, everything in the replay is deterministic.&lt;br /&gt;
&lt;br /&gt;
To perform an audit, the user:&lt;br /&gt;
&lt;br /&gt;
1. obtains a segment of the machine&#039;s log and the authenticators&lt;br /&gt;
&lt;br /&gt;
2. downloads a snapshot of the AVM at the beginning of the segment&lt;br /&gt;
&lt;br /&gt;
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.&lt;br /&gt;
&lt;br /&gt;
The user can verify the execution of the software through three checks: verifying the log, verifying the snapshot, and verifying the execution.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a log segment, the user retrieves from the machine the authenticators with sequence numbers in the range of the segment. The user then downloads the log segment itself, starting from the most recent snapshot before the beginning of the segment and ending at the most recent snapshot before the end of the segment, and checks the authenticators for tampering. If this step succeeds, the user can assume the log segment is authentic. If the machine is faulty, the segment will be unavailable for download, or the machine may return a corrupted log segment; either outcome can be used to convince a third party of the fault.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify the snapshot, the user obtains a snapshot of the AVM&#039;s state at the beginning of the log segment. The user downloads the snapshot from the machine, and the AVMM recomputes the hash tree. The new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use them to convince a third party of the machine&#039;s faults.&lt;br /&gt;
&lt;br /&gt;
In order to verify the execution of a log segment, the user needs three inputs: the log segment, the snapshot, and the public keys of the machine and any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (is the log well-formed?) and a semantic check (does the information in the log correspond to a correct execution of the machine?).&lt;br /&gt;
&lt;br /&gt;
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and leaving the AVM.&lt;br /&gt;
&lt;br /&gt;
The semantic check creates a local VM that executes the machine&#039;s log segment; the VM is initialized with a snapshot from the machine if possible. The local VM then runs the log segment and the resulting data is recorded. The auditing tool checks the log entries, inputs, outputs, and snapshot hashes of the replayed execution against the original log. If any discrepancies are detected, the fault is reported and can be used as evidence against the machine.&lt;br /&gt;
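The essence of the semantic check can be sketched as a toy comparison. This is a deliberate oversimplification under stated assumptions: the "VM" is reduced to a pure function, the log to (input, claimed output) pairs, and all names are invented; a real audit replays the full VM state from a snapshot.

```python
# Toy semantic check: replay logged inputs on a trusted reference copy
# of the program and flag entries whose claimed outputs differ.
def reference_vm(x):
    """Trusted reference copy of the audited program (stand-in)."""
    return x * 2

def semantic_check(log, vm):
    """log: list of (input, claimed_output) pairs from the machine.
    Returns indices of entries whose replayed output disagrees --
    each one is evidence of a fault."""
    faults = []
    for i, (inp, claimed) in enumerate(log):
        if vm(inp) != claimed:
            faults.append(i)
    return faults

honest  = [(1, 2), (3, 6)]
cheater = [(1, 2), (3, 7)]   # machine logged an output the real code
                             # could never have produced
assert semantic_check(honest, reference_vm) == []
assert semantic_check(cheater, reference_vm) == [1]
```

This is why the audit needs a reference copy of the VM image (assumption 4 above): without it there is nothing trusted to replay against.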
&lt;br /&gt;
Why is it better?&lt;br /&gt;
[To Do]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I read through it and fixed a few missing letters here and there. If someone else could read it as well and then sign under me, we can probably move it to the essay. Thanks. --[[User:Mchou2|Mchou2]] 23:53, 25 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I just read it and fixed some small parts. Looks good. --[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The layout of the paper is essential to the reader&#039;s comprehension. The introduction clearly describes what the reader should expect in the following pages, especially which problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of the advantages and disadvantages of an AVM. A good example is cheat detection. Cheaters use programs that work around the original game code to gain a major advantage over other players. Since an AVM detects cheats generically, it covers a wider range of cheats than most dedicated cheat-detection algorithms. The logs also give the game a replay function: players using an AVM can see how other players played by replaying the game from a player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player may suffer from running the AVM. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper&#039;s example it is 148 MB per hour after compression. The logging reduces the frame rate, and the connection through the AVM increases the ping time to the server. &lt;br /&gt;
&lt;br /&gt;
As a proof of concept, the authors used their AVM with the online game Counter-Strike and tried to detect online cheats, using “Dell Precision T1500 workstations, with 8 GB of memory and 2.8 GHz Intel Core i7 860 CPUs”[pg 10]. These machines are considerably more powerful than Counter-Strike&#039;s system requirements of a “500 MHz processor, 96 MB RAM”[10]. A ten-year-old game [10] should use few resources on a Dell Precision T1500 workstation, whereas newer games consume far more resources than Counter-Strike, leaving less headroom to run the AVM. A 13% slowdown [pg 12] in a game where you are only getting 30 to 40 fps is quite noticeable, and very detrimental to gameplay, since performance above 60 fps is considered optimal.&lt;br /&gt;
&lt;br /&gt;
In the paper the authors state that the AVM will only generate an extra 5 ms of latency. While this does not seem like a lot, the measurement was taken over a LAN with all the computers connected to the same switch [pg 12]. This sample does not accurately represent real-life situations and therefore lacks external validity: many of these online games are played over the internet, with participants sometimes not even on the same continent, so the latency overhead of the AVM would certainly increase with the added distance. [12]&lt;br /&gt;
&lt;br /&gt;
Additional Critiques:&lt;br /&gt;
&lt;br /&gt;
While the paper does test a slightly larger than one-to-one scenario, it certainly does not test a 1:16, 1:32, 1:64, or much larger scenario of the kind that would likely exist in a real-world deployment.&lt;br /&gt;
&lt;br /&gt;
To keep overhead low, spot checking is necessary, which leaves a chance of a fault going undetected in the worst case.&lt;br /&gt;
&lt;br /&gt;
More and more programs use more than one CPU core, which cannot be efficiently logged deterministically at this time. Fortunately, it has been shown to be possible, albeit with a large overhead, and could become practical at a later date.&lt;br /&gt;
&lt;br /&gt;
The paper repeatedly claims that AVMs could be used for arbitrary applications, but only ever shows evidence for one: Counter-Strike.&lt;br /&gt;
&lt;br /&gt;
AVMs are only truly effective against one type of cheating: cheats that produce incorrect network messages. While the paper shows AVMs to be effective at catching current cheat programs that require installation inside the VM, those programs could evolve to run on the host machine and avoid the AVM entirely. Furthermore, since an AVM would not even flag the installation of a cheat program as faulty unless installation is disabled during use, no installation or updating can go on while the program is in use, which may not be desirable.&lt;br /&gt;
&lt;br /&gt;
AVMs will not catch any bug or exploit in the program itself that a malicious user could abuse, as the exploit would appear on both the user&#039;s and the monitor&#039;s systems and behave the same way.&lt;br /&gt;
&lt;br /&gt;
// more Critiques&lt;br /&gt;
&lt;br /&gt;
For their use case, the authors did not mention that in Counter-Strike the user can record a demo of his current game. Some online leagues require every player to record his own demo and upload it to the website, where every person in the league can watch it; without this demo, the team loses the match immediately. &lt;br /&gt;
Additionally, some leagues require the player to start an extra program (e.g. Electronic Sports League WIRE), which checks the programs running in the background. It also takes random snapshots of the current player&#039;s screen, compresses all the information into a file, and uploads it to one of the league&#039;s servers, where it can be checked by any player.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP),Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/&lt;br /&gt;
&lt;br /&gt;
[10] Counter-Strike http://store.steampowered.com/app/10/&lt;br /&gt;
&lt;br /&gt;
[12] L. L. Peterson and B. S. Davie. Computer Networks: A Systems Approach, 2007.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Since there are more members than parts, multiple members will have to work on the same parts, or can work on all of them; it&#039;s really up to you. I know that most people have a lot of projects coming up, so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good. As I was about to post what I had for the research problem, I saw you had already posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry, I haven&#039;t written anything for the Critique yet. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-If anyone has information they are working on, they can just post it so others can look at it and maybe build on it. I&#039;m sure everyone is aware of the extension we got; still, let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- Just added my contribution section; can someone proofread it and sign here before I move it over to the essay? I didn&#039;t do the &amp;quot;why is it better&amp;quot; part because I found the implementation took a lot of writing. For anyone who wants to do that part, I&#039;d suggest comparing AVMs to PunkBuster and/or VAC, and to a cloud-computing service (focusing on the auditing). Cheers --[[User:Hirving|Hirving]] 19:44, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I started the what-is-better/worse part in the Critique section. I will add the comparison of AVMs to PunkBuster and/or VAC soon. I personally feel there is not that much to write for the Critique section. -- [[User:Sschnei1|Sschnei1]] 20:39, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Hey. I have a bit to add to your Critique section. It&#039;s mostly expanding on your last paragraph, plus a bit on how the tests were performed. I&#039;ll post my stuff later tonight; I just need to find some sources for my argument.--[[User:Pcox|Pcox]] 01:06, 25 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I read through the critiques and will post some modifications. I was wondering about the last point of the critique, which says the authors didn&#039;t mention recording the game. On page 2 they did: &lt;br /&gt;
&amp;quot;However, replay by itself is not sufficient to detect faults on a re-&lt;br /&gt;
mote machine, since the machine could record incorrect&lt;br /&gt;
information in such a way that the replay looks correct,&lt;br /&gt;
or provide inconsistent information to different auditors&lt;br /&gt;
&amp;quot;&lt;br /&gt;
So should we remove the last point?&lt;br /&gt;
Also, the second and third paragraphs in the Critique do not really critique anything; they mostly restate the paper&#039;s contribution. Should we keep them?&lt;br /&gt;
&lt;br /&gt;
- Sure, post some modifications. What I meant in the first part is that the game has an internal recording mechanism that records a 1:1 video of your in-game screen, which can be replayed from within the game. I thought it was useful to include, but if it&#039;s unnecessary for the paper, we can take it out.&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5692</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5692"/>
		<updated>2010-11-29T17:35:31Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at your connect account. Anyone specific wants to send him the email with the group members inside? If not, I just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution to a log file so that the execution can later be replayed, allowing an observer to follow exactly what was happening on the machine. Remus [[#References | [1]]] contributed a highly efficient snapshotting mechanism used for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action performed on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; There are programs like GridCop[[#References | [2]]] that can monitor the progress and execution of a remotely executing program by requesting beacon packets. When the remote computer sends these packets, the receiving/logging computer must be trusted (hardware, software, and OS) so that packet reception remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will produce an inaccurate outcome. The AVM does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and preventative software is that it must already know about the specific cheats or situations it can handle. An AVM, by contrast, is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; This refers to an execution whose normal/expected operations do not match those of the trusted host/reference execution; when they differ, a violation has occurred.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative  for the first part : &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tries to tackle a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that the interaction is independent of the particular node and depends only on the intended software. Say node A interacts with node B via execution exe1, and node A also interacts with node C via exe1, but node C has been modified and responds with exe2. We can then expect the responses of B and C to differ. Being able to prove, beyond any doubt, that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust in the interactions between one user and another, as well as between a user and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that an interaction done with node A is the same as it would be with another node, node B. Say node A interacts with node B via execution exe1; when A and B then interact with node C, they would both expect to interact via exe1, but if node C behaves differently and executes exe2, it would be beneficial to be notified of the difference. Some concrete examples: node A is playing a game with node B, and the game executed on B is the same as on A; when A plays with node C, however, C executes the same operations as A plus a cheating program. Or: when node A buys products from node B&#039;s server, the server processes the order and then deletes A&#039;s sensitive information (execution 1); when A buys from node C&#039;s server, the order is processed but A&#039;s sensitive information is also rerouted to another server, where it can be used without permission. These are only a few cases where the operations of an execution need to be logged and verified. The problem being addressed is to create a procedure by which a node can be held accountable, logging the operations of an execution so as to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games there are cheats that users employ to gain advantages the original game never intended.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a previously catalogued cheating operation is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can be run in the background of a game such as Counter-Strike, and the PunkBuster system on the user&#039;s machine already has aimbot.exe catalogued as a cheat by its developers, PunkBuster might notify the current game servers or even prevent the user from playing until aimbot.exe is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, without a doubt, that a node is faulty, and to prove it with solid evidence. It can also be used to defend a node against false accusations. Numerous systems already use accountability, but they were mostly tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview[[#References |[7]]], a system closely related to this work, must be integrated into the application itself, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets and can tell whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Observing network activity is a common solution, since inspecting a node&#039;s inbound and outbound traffic reveals how the software is operating, or, in the case of an AVM, how the whole virtual machine is behaving. GridCop[[#References |[8]]] is an example that periodically inspects a small number of packets. Another way of detecting faults remotely is to use a trusted node, which can tell immediately when a fault occurs or a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- Add anything else you would like to add or modify, or leave a note in the discussion section if you want me to take another look at or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the processes of an execution on a specific node (computer) depends greatly on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations performed during some execution on a node. Replaying those operations shows what the node was doing, which might seem sufficient for finding out whether a node was causing integrity violations. But snapshotting/recording the operations is not the issue with deterministic replay; the issue is that the data written to the replay log may be tampered with by the node itself so that the replay looks correct. By faking the results of its operations, the audited computer can make the auditing computer falsely believe that all operations ran normally. The logging done by these recording programs is directly related to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most useful contribution of the accountable virtual machine (AVM) work proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM). It is what allows the fault checking of virtual machines in a cloud-computing environment. The AVMM can be broken down into three parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMware Workstation 6.5.1[[#References |[9]]], the tamper-evident log was adapted from code in PeerReview[[#References |[7]]], and the audit tools were built from scratch. &lt;br /&gt;
&lt;br /&gt;
The accountable virtual machine monitor relies on four assumptions:&lt;br /&gt;
&lt;br /&gt;
1. All transmitted messages are received, retransmitted if needed.&lt;br /&gt;
&lt;br /&gt;
2. Machines and Users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.&lt;br /&gt;
&lt;br /&gt;
3. All parties have a certified keypair that can be used to sign messages.&lt;br /&gt;
&lt;br /&gt;
4. To audit a log, the user has a reference copy of the VM being audited.&lt;br /&gt;
&lt;br /&gt;
The job of the AVMM is to record all incoming and outgoing messages in a tamper-evident log,&lt;br /&gt;
along with enough information about the execution to enable deterministic replay. &lt;br /&gt;
&lt;br /&gt;
The AVMM must record nondeterministic inputs (such as hardware interrupts): because such input is asynchronous, the exact timing of each input must be recorded so that it can be injected at the same moment during replay. Wall-clock time is not accurate enough for this, so the AVMM uses a combination of the instruction pointer, a branch counter, and additional registers. Not all inputs have to be recorded this way; software interrupts, for example, are requests issued to the AVM, and they will be issued again during replay.&lt;br /&gt;
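As an illustrative sketch (not the authors&#039; code; the type and field names are hypothetical), the idea of pinning an asynchronous event to an exact execution point, rather than to a wall-clock time, can be expressed as:&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionPoint:
    """Identifies the exact instruction at which an asynchronous event
    (e.g. a hardware interrupt) was delivered. Wall-clock time is too
    coarse, so a branch counter plus the instruction pointer is used."""
    branch_count: int   # branches retired since the start of execution
    ip: int             # instruction pointer at delivery time

def should_inject(current: ExecutionPoint, logged: ExecutionPoint) -> bool:
    # During replay, inject the logged interrupt exactly when the
    # replayed execution reaches the recorded execution point.
    return current == logged
```

During replay, the monitor would compare the current execution point against the logged one and inject the interrupt only on an exact match.&lt;br /&gt;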
&lt;br /&gt;
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. &lt;br /&gt;
It is important for the AVMM to detect inconsistencies between the user&#039;s log and the machine&#039;s log (in case of foul play), so the AVMM cross-references messages and inputs during replay, easily detecting any discrepancies.&lt;br /&gt;
&lt;br /&gt;
The AVMM periodically takes snapshots of the AVM&#039;s current state; this facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered somewhat by making the snapshots incremental (only the state that has changed since the last snapshot is saved). The user can authenticate a snapshot using a hash tree over the state (generated by the AVMM), which is updated after each snapshot.  &lt;br /&gt;
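A minimal sketch of how a hash tree can authenticate a snapshot (an illustration only, assuming SHA-256 and per-page hashing; merkle_root and the page granularity are hypothetical, not the AVMM&#039;s actual data structure):&lt;br /&gt;

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(pages):
    """Root of a hash tree over the snapshot's memory pages.
    Changing any single page changes the root, so an auditor can
    authenticate an entire snapshot by checking one hash value."""
    level = [H(p) for p in pages]
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate last node
            level.append(level[-1])
        level = [H(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]
```

An incremental snapshot only needs to rehash the pages that changed and the tree nodes above them, which keeps the per-snapshot overhead low.&lt;br /&gt;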
&lt;br /&gt;
&#039;&#039;&#039;Tamper-Evident Log&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The log is made up of hash-chained entries.&lt;br /&gt;
Each log entry has the form e&lt;sub&gt;i&lt;/sub&gt; = (s&lt;sub&gt;i&lt;/sub&gt;, t&lt;sub&gt;i&lt;/sub&gt;, c&lt;sub&gt;i&lt;/sub&gt;, h&lt;sub&gt;i&lt;/sub&gt;), where:&lt;br /&gt;
s = a monotonically increasing sequence number&lt;br /&gt;
t = the entry type&lt;br /&gt;
c = the data for that type&lt;br /&gt;
h = a hash value&lt;br /&gt;
&lt;br /&gt;
The hash value is calculated by: h&lt;sub&gt;i&lt;/sub&gt; = H(h&lt;sub&gt;i-1&lt;/sub&gt; || s&lt;sub&gt;i&lt;/sub&gt; || t&lt;sub&gt;i&lt;/sub&gt; || H(c&lt;sub&gt;i&lt;/sub&gt;))&lt;br /&gt;
H() is a hash function.&lt;br /&gt;
|| stands for concatenation.&lt;br /&gt;
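The construction above can be sketched as follows (a minimal illustration assuming SHA-256 as H; the class and the byte encodings of the fields are hypothetical, not the authors&#039; implementation):&lt;br /&gt;

```python
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in for the pre-image and collision resistant hash
    function the parties are assumed to share (assumption 2)."""
    return hashlib.sha256(data).digest()

class TamperEvidentLog:
    def __init__(self):
        self.entries = []               # list of (s, t, c, h) tuples
        self.prev_hash = b"\x00" * 32   # h_0, a fixed base value

    def append(self, entry_type: bytes, content: bytes) -> bytes:
        s = len(self.entries) + 1       # monotonically increasing sequence number
        # h_i = H(h_(i-1) || s || t || H(c))
        h = H(self.prev_hash + str(s).encode() + entry_type + H(content))
        self.entries.append((s, entry_type, content, h))
        self.prev_hash = h
        return h
```

Because each hash folds in the previous one, modifying, inserting, or removing any earlier entry changes every later hash in the chain.&lt;br /&gt;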
&lt;br /&gt;
Each message is signed with the sender&#039;s private key; the AVMM logs the message with the signature attached but removes the signature before delivering the message to the AVM. To ensure nonrepudiation, an authenticator is attached to each outgoing message.&lt;br /&gt;
&lt;br /&gt;
To detect when a message is dropped, each party sends an acknowledgement for each message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages entirely, the machine is presumed to have failed.&lt;br /&gt;
&lt;br /&gt;
To perform a log check, the user retrieves a pair of authenticators and then challenges the machine to produce the log segment between the two. The log is computationally infeasible to edit without breaking the hash chain; thus, if the log has been tampered with, the hash chain will not match and the user will be notified of the tampering.&lt;br /&gt;
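A sketch of that check (hypothetical helper names; the real audit tool also verifies the signatures on the authenticators themselves):&lt;br /&gt;

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def entry_hash(prev_hash: bytes, s: int, t: bytes, c: bytes) -> bytes:
    # h_i = H(h_(i-1) || s || t || H(c)), as in the log construction
    return H(prev_hash + str(s).encode() + t + H(c))

def verify_segment(entries, start_hash: bytes, end_hash: bytes) -> bool:
    """Recompute the hash chain over a downloaded segment of (s, t, c)
    entries and check that it ends at the hash vouched for by the
    machine's signed authenticator. Any insertion, deletion, or
    modification of an entry breaks the chain."""
    h = start_hash
    for s, t, c in entries:
        h = entry_hash(h, s, t, c)
    return h == end_hash
```
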
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Auditing Mechanism&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
From the VMM&#039;s perspective, all events are deterministic.&lt;br /&gt;
&lt;br /&gt;
To perform an audit, the user:&lt;br /&gt;
&lt;br /&gt;
1. obtains a segment of the machine&#039;s log and the authenticators&lt;br /&gt;
&lt;br /&gt;
2. downloads a snapshot of the AVM at the beginning of the segment&lt;br /&gt;
&lt;br /&gt;
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.&lt;br /&gt;
&lt;br /&gt;
The user can verify the execution of the software through three different checks: verifying the log, the snapshot, and the execution.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a log segment, the user retrieves from the machine the authenticators whose sequence numbers fall within the range of the segment. The user then downloads the log segment itself, starting from the most recent snapshot before the segment begins and ending at the most recent snapshot before the segment ends, and checks the authenticators for tampering. If this check passes, the user can assume the log segment is authentic. If the machine is faulty, the segment will be unavailable for download, or a corrupted log segment will be returned; either outcome can be used to convince a third party of the fault.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify the snapshot, the user obtains a snapshot of the AVM&#039;s state at the beginning of the log segment. The user downloads the snapshot from the machine, the hash tree is recomputed, and the new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use them to convince a third party of the machine&#039;s fault.&lt;br /&gt;
&lt;br /&gt;
In order to verify the execution of a log segment, the user needs three inputs: the log segment, the snapshot, and the public keys of the machine and of any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (is the log well-formed?) and a semantic check (does the information in the log correspond to a correct execution of the machine?).&lt;br /&gt;
&lt;br /&gt;
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and exiting the AVM.&lt;br /&gt;
&lt;br /&gt;
The semantic check creates a local VM that executes the machine&#039;s log segment; the VM is initialized from a snapshot of the machine if possible. The local VM then replays the log segment while its data is recorded. The auditing tool checks the log entries, inputs, outputs, and snapshot hashes of the replayed execution against the original log. If any discrepancies are detected, the fault is reported and can be used as evidence against the machine.&lt;br /&gt;
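As a sketch, a toy version of the syntactic check might look like this (hypothetical and far simpler than the real audit tool, which also verifies signatures and message ordering):&lt;br /&gt;

```python
def syntactic_check(entries) -> bool:
    """Minimal well-formedness pass over (s, t, c) entries:
    sequence numbers must be contiguous, and every SEND entry
    must eventually be matched by an ACK entry."""
    expected = 1
    pending = set()
    for s, t, c in entries:
        if s != expected:        # gap or reordering in the log
            return False
        expected += 1
        if t == "SEND":
            pending.add(c)
        elif t == "ACK":
            pending.discard(c)
    return not pending           # every sent message was acknowledged
```
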
&lt;br /&gt;
Why is it better?&lt;br /&gt;
[To Do]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I read through it and fixed a few missing letters here and there, so if someone else could read it as well and then sign under me we can probably move it to the essay. Thanks . --[[User:Mchou2|Mchou2]] 23:53, 25 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I just read it and fixed some small parts. Looks good. --[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
For the reader&#039;s comprehension, it is important for a paper to have a clear overview and layout. The introduction clearly describes what the reader can expect in the following pages, especially which problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of the advantages and disadvantages of an AVM. A good example is cheat detection. Cheaters use programs to work around the original game code and gain a major advantage over other players. Since an AVM detects cheats generically, it supports a wider range of cheats than most other cheat-detection mechanisms. The logs also give the game a replay function: players using an AVM can see how another player played by replaying the game from that player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The downside is that the player may suffer under the AVM. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper&#039;s example, 148 MB per hour after compression. The logging also reduces the frame rate (fps), and the connection through the AVM increases the ping time to the server. &lt;br /&gt;
&lt;br /&gt;
The test case for the AVM was detecting people using cheats in the popular online game Counter-Strike. The authors used “Dell Precision T1500 workstations, with 8 GB of memory and 2.8 GHz Intel Core i7 860 CPUs”[pg 10]. These machines are considerably more powerful than Counter-Strike&#039;s system requirements of a “500 MHz processor, 96 MB RAM”[10]. A ten-year-old game [10] should use only a small fraction of a Dell Precision T1500 workstation&#039;s resources; newer games consume far more resources than Counter-Strike, leaving less room to run the AVM. A 13% slowdown [pg 12] in a game where you are only getting 30 to 40 fps is quite noticeable, and detrimental to gameplay, since over 60 fps is considered optimal performance.&lt;br /&gt;
&lt;br /&gt;
In the paper the authors state that the AVM will only generate an extra 5 ms of latency. While this does not seem like a lot, the measurement was taken over a LAN with all the computers connected to the same switch [pg 12]. This sample does not accurately represent real-life conditions and therefore lacks external validity: many of these online games are played over the Internet, with the participants sometimes not even on the same continent, so the latency overhead of the AVM would certainly increase with the added distance. [12]&lt;br /&gt;
&lt;br /&gt;
Additional Critiques:&lt;br /&gt;
&lt;br /&gt;
While the paper does test a scenario slightly larger than one-to-one, it certainly does not test 1:16, 1:32, 1:64, or even higher ratios that would likely exist in a real-world application.&lt;br /&gt;
&lt;br /&gt;
In order to keep overhead low, spot checking is necessary, which leaves a chance of a fault going undetected in the worst case.&lt;br /&gt;
&lt;br /&gt;
More and more programs use more than one CPU core, which cannot currently be logged deterministically with reasonable efficiency. Fortunately, it has been shown to be possible, albeit with a large overhead, and could become practical at a later date.&lt;br /&gt;
&lt;br /&gt;
The paper repeatedly claims that AVMs could be used for arbitrary applications, but only ever shows evidence for one: Counter-Strike.&lt;br /&gt;
&lt;br /&gt;
AVMs are only extremely effective against one type of cheating: that which produces incorrect network messages. While the paper shows they are effective at catching current cheat programs that require installation inside the VM, those programs could evolve to run on the host machine and avoid the AVM entirely. Further, since an AVM would not even flag the installation of a cheat program as faulty, installation and updating must be disabled while the program is in use, which may not be desirable.&lt;br /&gt;
&lt;br /&gt;
AVMs will not in any way catch a bug or exploit in a program that a malicious user could abuse, as the exploit would appear on both the user&#039;s and the monitor&#039;s systems and behave identically.&lt;br /&gt;
&lt;br /&gt;
// more Critiques&lt;br /&gt;
&lt;br /&gt;
For their use case, the authors did not mention that in Counter-Strike the user can record a demo of the current game. Some online leagues require every player to record a demo and upload it to the league website, where every person in the league can watch it. Without this demo, the team loses the match immediately. &lt;br /&gt;
Additionally, some leagues require the player to run an extra program (e.g. Electronic Sports League WIRE), which checks the programs running in the background. It also takes random snapshots of the current player&#039;s screen, compresses all the information into a file, and uploads it to one of the league&#039;s servers, where it can be checked by any player.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP), Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/&lt;br /&gt;
&lt;br /&gt;
[10] Counter-Strike web site. http://store.steampowered.com/app/10/&lt;br /&gt;
&lt;br /&gt;
[12] L. L. Peterson and B. S. Davie. Computer Networks: A Systems Approach, 2007.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Obviously, since there are more members than parts, multiple members will have to work on the same parts, or can work on all parts; I guess it&#039;s really up to you. I know that most people have a lot of projects coming up, so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good, and as I was going to post what I had for the research problem, I saw you had posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I haven&#039;t written anything for the Critique yet. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-If anyone has information that they are working on, they can just post it up so others can look at it and maybe build on it. I&#039;m sure everyone is aware of the extension that we got, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- Just added my contribution section; can someone proofread and sign it before I move it over to the essay? I didn&#039;t do the &amp;quot;why is it better&amp;quot; part because I found the implementation took a lot of writing. For anyone who wants to do the other part, I&#039;d suggest comparing AVMs to PunkBuster and/or VAC, and to a cloud computing service (focusing on the auditing). Cheers --[[User:Hirving|Hirving]] 19:44, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I started the &amp;quot;what is better/worse&amp;quot; part in the Critique section. I will add the comparison of AVMs to PunkBuster and/or VAC soon. I personally feel there is not that much to write for the Critique section. -- [[User:Sschnei1|Sschnei1]] 20:39, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Hey. I have a bit to add to your Critique section. It&#039;s mostly expanding on your last paragraph, plus a bit on how the tests were performed. I&#039;ll post my stuff later tonight; I just need to find some sources for my argument.--[[User:Pcox|Pcox]] 01:06, 25 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5688</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5688"/>
		<updated>2010-11-29T13:06:08Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at your Connect account. Does anyone specifically want to send him the email with the group members? If not, I will just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM):&#039;&#039;&#039; A virtual machine whose execution is recorded in a tamper-evident log, so that a remote party can later audit the log and verify that the machine executed the intended software correctly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay:&#039;&#039;&#039; A machine can record its execution to a file so that the execution can later be replayed step by step, making it possible to follow exactly what was happening on the machine. Remus [[#References | [1]]] contributed a highly efficient snapshotting mechanism for such replays.&lt;br /&gt;
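As a rough illustration (a toy sketch, not the mechanism used by Remus or the paper), deterministic replay amounts to logging every nondeterministic input during recording and injecting the logged values during replay:

```python
import random

def run(workload, nondet_source=None, log=None, replay=None):
    # Record mode: draw fresh nondeterministic inputs and log them.
    # Replay mode: inject the previously logged inputs instead.
    outputs = []
    for step, work in enumerate(workload):
        if replay is not None:
            value = replay[step]      # inject the logged input
        else:
            value = nondet_source()   # fresh nondeterministic input
            log.append(value)         # record it for later replay
        outputs.append(work + value)
    return outputs

workload = [10, 20, 30]
log = []
recorded = run(workload, nondet_source=lambda: random.randint(0, 9), log=log)
replayed = run(workload, replay=log)
assert recorded == replayed  # the replay reproduces the execution exactly
```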
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action performed on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; Programs like GridCop[[#References | [2]]] can be used to monitor the progress and execution of a remotely executing program by requesting beacon packets. When the remote computer sends the packets, the receiving/logging computer must be trusted (hardware, software, and OS) so that the reception of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will result in an inaccurate outcome. An AVM, in contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can either be scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and prevention software is that it must already know about the specific cheats or situations it handles. An AVM is designed to counter any kind of cheat in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; An integrity violation occurs when the operations observed in an execution do not match those of the trusted host/reference execution.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative  for the first part : &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tries to tackle a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that the interaction is independent of the node and depends only on the intended software. Say node A interacts with node B running execution exe1, and node A also interacts with node C, which should be running exe1 but has been modified and responds with exe2. We can then expect the responses of B and C to differ. Being able to prove, without any doubt, that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust between the interactions of one user and another, as well as between a user and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that an interaction done with node A is the same as it would be with another node, node B. Say, for example, that node A interacts with node B through execution exe1; when nodes A and B interact with node C, they would both expect to interact with execution exe1, but if node C behaves differently and executes exe2, it would be beneficial to be notified of this difference. Some concrete examples: node A is playing a game with node B, and the game executed on node B is the same as on A; when node A plays with node C, node C is executing the same operations as node A plus a cheating program. Or: when node A buys some products from node B&#039;s server, the server processes the order and then deletes node A&#039;s sensitive information (execution 1); when node A buys from node C&#039;s server, the order is processed, but the sensitive information node A provided is also rerouted to another server so that it can be used without permission. These are only a few cases where the operations in an execution need to be logged and verified. The problem being addressed is to create a procedure by which a node can be held accountable, logging the operations in an execution to provide evidence of faults committed by a node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games there are cheats that users run to gain benefits that were not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a previously catalogued cheating program is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can run in the background of a game such as Counter-Strike, and the PunkBuster software on the user&#039;s system already has aimbot.exe logged as a cheating program, PunkBuster might notify the current game servers or even prevent the user from playing until the aimbot.exe process is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, without a doubt, that a node is faulty and to prove it with solid evidence. It can also be used to defend a node against false accusations. Numerous systems already use accountability, but they are mostly tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview[[#References |[7]]], a system closely related to this work, must be integrated into the application itself, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets and can tell whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Observing network activity is a common solution, looking at the inbound and outbound traffic of the node. This reveals how the software is operating, or, in the case of an AVM, how the whole virtual machine is behaving. GridCop[[#References |[8]]] is one example that inspects a small number of packets periodically. Another way of detecting faults remotely is to use a trusted node, which can tell immediately if a fault occurs or a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-and anything else you would like to add or modify, or leave a note in the discussion section if you want me to take another look or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the execution of a specific node (computer) depends greatly on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations performed during an execution on a node. Replaying those operations shows what the node was doing, which would seem sufficient for finding out whether a node caused integrity violations. The issue with deterministic replay is not the snapshotting/recording of operations; it is that the data written to the replay log may be tampered with by the node itself so that the replay shows ideal results. By faking the results of its operations, the audited computer can make the auditing computer falsely believe that it ran all operations normally. The logging performed by these recording programs is directly related to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most useful contribution of the accountable virtual machine (AVM) proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM). It is what allows the fault checking of virtual machines in a cloud computing environment. The AVMM can be broken down into three parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMWare Workstation 6.5.1[[#References |[9]]], the tamper-evident log was adapted from code in PeerReview[[#References |[7]]], and the audit tools were built from scratch. &lt;br /&gt;
&lt;br /&gt;
The accountable virtual machine monitor relies on four assumptions:&lt;br /&gt;
&lt;br /&gt;
1. All transmitted messages are eventually received, being retransmitted if needed.&lt;br /&gt;
&lt;br /&gt;
2. Machines and users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.&lt;br /&gt;
&lt;br /&gt;
3. All parties have a certified keypair that can be used to sign messages.&lt;br /&gt;
&lt;br /&gt;
4. To audit a log, the user has a reference copy of the VM used.&lt;br /&gt;
&lt;br /&gt;
The job of the AVMM is to record, in the tamper-evident log, all incoming and outgoing messages along with enough information about the execution to enable deterministic replay. &lt;br /&gt;
&lt;br /&gt;
The AVMM must record nondeterministic inputs (such as hardware interrupts) because such input is asynchronous, and the exact timing of each input must be recorded so that it can be injected at the same moment during replay. Wall-clock time is not accurate enough for this, so the AVMM uses a combination of the instruction pointer, a branch counter, and additional registers. Not all inputs have to be recorded this way: software interrupts, for example, are requests issued to the AVM that will be issued again during replay.     &lt;br /&gt;
&lt;br /&gt;
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. &lt;br /&gt;
It is important for the AVMM to detect inconsistencies between the user&#039;s log and the machine&#039;s log (in case of foul play); the AVMM cross-references messages and inputs during replay, easily detecting any discrepancies.&lt;br /&gt;
&lt;br /&gt;
The AVMM periodically takes snapshots of the AVM&#039;s current state. This facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered slightly by making the snapshots incremental (only the state that has changed since the last snapshot is saved). The user can authenticate a snapshot using a hash tree of the state (generated by the AVMM), which is updated after each snapshot.  &lt;br /&gt;
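A minimal sketch of such a snapshot hash tree (hypothetical helper names; the real AVMM hashes VM state such as memory pages and only re-hashes what has changed):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(pages):
    # Build a hash tree bottom-up over the snapshot pages and return its root.
    level = [H(p) for p in pages]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

pages = [b"page0", b"page1", b"page2", b"page3"]
root = merkle_root(pages)

# Changing any part of the state changes the root the auditor authenticates.
pages[2] = b"page2-modified"
assert merkle_root(pages) != root
```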
&lt;br /&gt;
&#039;&#039;&#039;Tamper-Evident Log&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The log is a chain of hashed entries.&lt;br /&gt;
Each log entry has the form e_i = (s_i, t_i, c_i, h_i), where&lt;br /&gt;
s_i = a monotonically increasing sequence number&lt;br /&gt;
t_i = the type of the entry&lt;br /&gt;
c_i = the data of that type&lt;br /&gt;
h_i = a hash value&lt;br /&gt;
&lt;br /&gt;
The hash value is calculated as h_i = H(h_(i-1) || s_i || t_i || H(c_i)), where&lt;br /&gt;
H() is a hash function and&lt;br /&gt;
|| stands for concatenation.&lt;br /&gt;
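The entry format above can be sketched directly (a simplified model; the real log additionally carries signatures):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def append_entry(log, t: str, c: bytes):
    # Append entry e = (s, t, c, h) with h = H(h_prev || s || t || H(c)).
    s = len(log)                                   # monotonically increasing sequence number
    h_prev = log[-1][3] if log else b"\x00" * 32   # fixed genesis value for the first entry
    h = H(h_prev + s.to_bytes(8, "big") + t.encode() + H(c))
    log.append((s, t, c, h))
    return h

log = []
append_entry(log, "send", b"hello")
append_entry(log, "recv", b"world")
append_entry(log, "nondet", b"timer interrupt")
```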
&lt;br /&gt;
Each outgoing message is signed with the sender&#039;s private key; the AVMM logs the message with the signature attached but removes the signature before delivering the message to the AVM. To ensure nonrepudiation, an authenticator is attached to each outgoing message.&lt;br /&gt;
&lt;br /&gt;
To detect when a message is dropped, each party sends an acknowledgement for each message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages entirely, the machine is presumed to have failed.&lt;br /&gt;
&lt;br /&gt;
To perform a log check, the user retrieves a pair of authenticators and then challenges the machine to produce the log segment between the two. The log is computationally infeasible to edit without breaking the hash chain; thus, if the log has been tampered with, the recomputed hash chain will differ and the user will be notified of the tampering.&lt;br /&gt;
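Given the two authenticators (signed hash values at the ends of the segment), the auditor simply recomputes the chain over the produced segment; any edit breaks the recomputation. A self-contained sketch, using the simplified entry format described above:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def entry_hash(h_prev: bytes, s: int, t: str, c: bytes) -> bytes:
    # h = H(h_prev || s || t || H(c)), matching the log format above
    return H(h_prev + s.to_bytes(8, "big") + t.encode() + H(c))

def verify_segment(entries, h_start, h_end):
    # Recompute the hash chain over the segment delimited by two authenticators.
    h = h_start
    for s, t, c in entries:
        h = entry_hash(h, s, t, c)
    return h == h_end

# The machine produced this segment; h0 and h_last come from signed authenticators.
h0 = b"\x00" * 32
segment = [(0, "send", b"hello"), (1, "recv", b"world")]
h_last = h0
for s, t, c in segment:
    h_last = entry_hash(h_last, s, t, c)

assert verify_segment(segment, h0, h_last)            # untampered segment verifies
tampered = [(0, "send", b"hell0"), (1, "recv", b"world")]
assert not verify_segment(tampered, h0, h_last)       # any edit breaks the chain
```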
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Auditing Mechanism&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
From the VMM&#039;s perspective the replayed execution is deterministic, since every nondeterministic event has been captured in the log.&lt;br /&gt;
&lt;br /&gt;
To perform an audit, the user:&lt;br /&gt;
&lt;br /&gt;
1. obtains a segment of the machine&#039;s log and the authenticators&lt;br /&gt;
&lt;br /&gt;
2. downloads a snapshot of the AVM at the beginning of the segment&lt;br /&gt;
&lt;br /&gt;
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.&lt;br /&gt;
&lt;br /&gt;
The user can verify the execution of the software through three different checks: verifying the log, the snapshot, and the execution.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a log segment, the user retrieves from the machine the authenticators whose sequence numbers fall in the range of the segment, then downloads the log segment itself, spanning from the most recent snapshot before the start of the segment to the most recent snapshot before its end. The user then checks the segment against the authenticators for tampering. If this check succeeds, the user can assume the log segment is genuine. If the machine is faulty, it will either fail to produce the segment or return a corrupted one, and either outcome can be used to convince a third party of the fault.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify the snapshot, the user obtains a snapshot of the AVM&#039;s state at the beginning of the log segment. The user downloads the snapshot from the machine, and the hash tree is recomputed. The new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use them to convince a third party of the machine&#039;s fault.&lt;br /&gt;
&lt;br /&gt;
In order for the user to verify the execution of a log segment, three inputs are needed: the log segment, the snapshot, and the public keys of the machine and of any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (is the log well-formed?) and a semantic check (does the information in the log reflect a correct execution of the machine?).&lt;br /&gt;
&lt;br /&gt;
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and exiting the AVM.&lt;br /&gt;
&lt;br /&gt;
The semantic check creates a local VM that executes the machine&#039;s log segment; the VM is initialized from a snapshot of the machine if possible. The local VM then replays the log segment and the resulting data is recorded. The auditing tool checks the inputs, outputs, and snapshot hashes of the replayed execution against the original log. If any discrepancies are detected, the fault is reported and can be used as evidence against the machine.&lt;br /&gt;
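In essence, the semantic check replays the logged inputs through a reference copy of the software and compares the outputs against the log (a toy sketch; reference_software is a hypothetical stand-in for replaying the real VM image):

```python
def reference_software(inp):
    # Stand-in for the reference copy of the VM/software being replayed.
    return inp * 2

def semantic_check(log_segment):
    # Replay each logged input and compare against the logged output.
    for inp, logged_out in log_segment:
        if reference_software(inp) != logged_out:
            return False   # discrepancy: evidence that the machine is faulty
    return True

honest_log = [(1, 2), (5, 10)]
cheating_log = [(1, 2), (5, 11)]   # an output the genuine software cannot produce
assert semantic_check(honest_log)
assert not semantic_check(cheating_log)
```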
&lt;br /&gt;
Why is it better?&lt;br /&gt;
[To Do]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I read through it and fixed a few missing letters here and there, so if someone else could read it as well and then sign under me we can probably move it to the essay. Thanks . --[[User:Mchou2|Mchou2]] 23:53, 25 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I just read it and fixed some small parts. Looks good. --[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
For the reader&#039;s comprehension, it is important for a paper/article/essay to have a good overview and layout. The introduction of this paper clearly describes what the reader can expect in the following pages, especially which problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of advantages and disadvantages of an AVM. A good example is &amp;quot;Cheat Detection&amp;quot;. Cheaters use programs that work around the original game code to gain a major advantage over other players. Since an AVM detects cheats generically, it covers a wider range of cheats than most other cheat detection approaches. The logs also make it possible to replay the game; thus, players using an AVM can see how other players play by replaying the game from the player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player might suffer from running the AVM. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper&#039;s example it is 148 MB per hour after compression. The logging overhead also reduces the fps, and the connection through the AVM increases the ping time to the server. &lt;br /&gt;
&lt;br /&gt;
The test case for the AVM was using it to detect cheaters in the popular online game Counter-Strike. The authors used “Dell Precision T1500 workstations, with 8 GB of memory and 2.8 GHz Intel Core i7 860 CPUs”[pg 10]. These machines are considerably more powerful than the system requirements of Counter-Strike, which are a “500 MHz processor, 96 MB RAM”[10]. A ten-year-old game [10] should use few resources on a Dell Precision T1500 workstation; newer games, in comparison, consume far more resources than Counter-Strike, leaving less room to run the AVM. A 13% slowdown [pg 12] in a game where you are only getting 30 to 40 fps is quite noticeable, and it is detrimental to the gameplay because 60 fps or more is considered optimal.&lt;br /&gt;
&lt;br /&gt;
In the paper the authors state that the AVM only generates an extra 5 ms of latency. While this does not seem like much, the measurement was taken over a LAN with all the computers connected to the same switch [pg. 12]. This sample does not accurately represent real-life conditions and therefore lacks external validity: many of these online games are played over the Internet, with participants sometimes not even on the same continent, so the latency overhead of the AVM would certainly increase with the added distance. [12]&lt;br /&gt;
&lt;br /&gt;
Additional Critiques:&lt;br /&gt;
&lt;br /&gt;
While the paper does test a slightly-larger-than-one-to-one scenario, it certainly does not test 1:16, 1:32, 1:64 or much higher ratios that would likely exist in a real-world application.&lt;br /&gt;
&lt;br /&gt;
In order to keep the overhead low, spot checking is necessary, which leaves a chance of a fault going undetected in the worst case.&lt;br /&gt;
&lt;br /&gt;
More and more programs use more than one CPU core, which cannot be efficiently deterministically logged at this time. Fortunately, it has been shown to be possible, albeit with a large overhead, and could become practical at a later date.&lt;br /&gt;
&lt;br /&gt;
The paper repeatedly claims that AVMs could be used for arbitrary applications, but only ever shows evidence for one: Counter-Strike.&lt;br /&gt;
&lt;br /&gt;
AVMs are only truly effective against one type of cheating: cheats that produce incorrect network messages. While the paper shows AVMs to be effective at catching current cheat programs that require installation inside the VM, those programs could evolve to run on the host machine and avoid the AVM entirely. Further, since an AVM would not even flag the installation of a cheat program as faulty, installation would have to be disabled while the program is in use, meaning no installation or updating can go on during use, which may not be desirable.&lt;br /&gt;
&lt;br /&gt;
AVMs will not catch any bug or exploit in a program that a malicious user could take advantage of, as the exploit would appear on both the user&#039;s and the auditor&#039;s systems and behave the same.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP), Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/&lt;br /&gt;
&lt;br /&gt;
[10] Counter-Strike web site. http://store.steampowered.com/app/10/&lt;br /&gt;
&lt;br /&gt;
[12] L. L. Peterson and B. S. Davie. Computer Networks: A Systems Approach, 2007.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings to this article, we can easily choose what parts each member would like to work on, obviously since there are more members than parts, multiple members will have to work on the same parts or can work on all parts, I guess it&#039;s really up to you. I know that most people have a lot of projects coming up so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good, and As i was going to post what I had for research problem, I just saw you posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you write and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I didn&#039;t write anything yet to Critique. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-if anyone has information that they are working on they can just post it up and at least others can look at it and maybe build up stuff on it, and I&#039;m sure everyone is aware of the extension that we got also, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later that night. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- Just added my contribution section, can someone proof read and sign it before I move it over to the essay. I didn&#039;t do the &amp;quot;why is it better&amp;quot; part because I found the implementation took a lot of writing. For anyone that wants to do the other part, I&#039;d suggest comparing AVMs to PunkBuster and/or VAC, and a cloud computing service (focusing on the auditing). Cheers --[[User:Hirving|Hirving]] 19:44, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I started that what is better/worse part in the Critique section. I will add the comparison with AVMs to Punkbuster and/or VAC soon. I personally feel like there is not that much to write for the Critique section. -- [[User:Sschnei1|Sschnei1]] 20:39, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Hey. I have a bit to add to your Critique section. It&#039;s mostly expanding on your last paragraph and a bit on how the tests were performed. I&#039;ll post my stuff later tonight; I just need to find some sources for my argument.--[[User:Pcox|Pcox]] 01:06, 25 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5531</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5531"/>
		<updated>2010-11-24T20:39:54Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at your connect account. Anyone specific wants to send him the email with the group members inside? If not, I just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution to a log file so that the execution can later be replayed, allowing an observer to follow exactly what was happening on the machine. Remus [[#References | [1]]] contributed a highly efficient snapshotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability, in the context of this paper, means that every action performed on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; Programs like GridCop[[#References | [2]]] can be used to monitor the progress and execution of a remotely executing program by requesting a beacon packet. When the remote computer sends the packets, the receiving/logging computer must be a trusted computer (hardware, software, OS) so that the receipt of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will produce an inaccurate outcome. An AVM, in contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and prevention software is that it must already know about the specific cheats or situations it handles. An AVM is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; This refers to a situation in which the normal/expected operations of an execution do not match those of the trusted host/reference execution; hence, a violation has occurred.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative for the first part: &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tries to tackle a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some sort of result or feedback from another node, it would hope that the interaction depends only on the intended software, and not on the particular node. Say node A interacts with node B running execution exe1, and node A also interacts with node C, which should be running exe1 but has been modified and instead responds with exe2. We can then expect the responses of B and C to differ. Being able to prove, without any doubt, that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust in the interactions between one user and another, as well as between a user and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that an interaction with node A behaves the same as the same interaction with another node, node B. Say, for example, that node A interacts with node B through execution exe1. When node A and node B then interact with node C, they both expect to interact with execution exe1; if node C instead behaves differently and executes exe2, it would be beneficial to be notified of this difference. Some concrete examples: node A is playing a game with node B, and the game executed on node B is the same as on A; but when node A plays with node C, node C executes the same operations as node A plus a cheating program. Or: when node A buys products from node B&#039;s server, the server processes the order and then deletes node A&#039;s sensitive information (execution 1); but when node A buys from node C&#039;s server, the order is processed and the sensitive information node A provided is also rerouted to another server, where it can be used without permission. These are only a few examples where the operations in an execution need to be logged and verified. The problem being addressed is to create a procedure by which a node can be held accountable, logging the operations in an execution to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is Cheat Detection: in many games, users run cheats, usually to create benefits for themselves that were not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not detect whether an arbitrary cheat is being used; rather, they check whether a previously catalogued cheating operation is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can be run in the background of a game such as CounterStrike, and the PunkBuster system on the user&#039;s machine already has aimbot.exe logged as a cheating program, PunkBuster might notify the current game servers or even prevent the user from playing until aimbot.exe is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, without a doubt, that a node is faulty, and to prove it with solid evidence. It can also be used to defend a node against false accusations. Numerous systems already use accountability, but they were mostly tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview[[#References |[7]]], a system closely related to the work presented here, must be implemented inside the application itself, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets and can tell whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another problem related to the paper is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Observing network activity is a common solution: by looking at the traffic entering and leaving the node, one can tell how the software is operating, or, in the case of an AVM, how the whole virtual machine is working. GridCop[[#References |[8]]] is one example; it inspects a small number of packets periodically. Another way of detecting a fault remotely is to use a trusted node, which can tell immediately if a fault occurs or a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-and anything else you would like to add or modify; leave a note in the discussion section if you want me to take another look or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the execution of a specific node (computer) depends greatly on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations performed during an execution on a node. Replaying those operations shows what the node was doing, which might seem sufficient for finding out whether a node caused integrity violations. The concept of snapshotting/recording the operations is not the issue with deterministic replay; the issue is that the data written into the replay log may be tampered with by the node itself so that the replay shows optimal results. By faking the results of its operations, a node can make the auditing computer falsely believe that it is running all operations normally. The logging done by these recording programs is thus directly related to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most useful contribution of the accountable virtual machine (AVM) proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM). It is what allows for the fault checking of virtual machines in a cloud computing environment. The AVMM can be broken down into different parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMWare Workstation 6.5.1[[#References |[9]]], the tamper-evident log was adapted from code in PeerReview[[#References |[7]]], and the audit tools were built from scratch. &lt;br /&gt;
&lt;br /&gt;
The accountable virtual machine monitor relies on four assumptions:&lt;br /&gt;
&lt;br /&gt;
1. All transmitted messages are received, if retransmitted sufficiently often.&lt;br /&gt;
&lt;br /&gt;
2. Machines and users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.&lt;br /&gt;
&lt;br /&gt;
3. All parties have a certified keypair that can be used to sign messages.&lt;br /&gt;
&lt;br /&gt;
4. To audit a log, the user has a reference copy of the VM used.&lt;br /&gt;
&lt;br /&gt;
The job of the AVMM is to record all incoming and outgoing messages to a tamper-evident log, along with enough information about the execution to enable deterministic replay. &lt;br /&gt;
&lt;br /&gt;
The AVMM must record nondeterministic inputs (such as hardware interrupts). Because such input is asynchronous, the exact timing of each input must also be recorded, so that the inputs can be injected at the same points during replay. Wall-clock time is not accurate enough for this, so the AVMM must use a combination of the instruction pointer, a branch counter, and possibly additional registers. Not all inputs have to be recorded this way: software interrupts, for example, send requests to the AVM, and those requests will be issued again during replay.&lt;br /&gt;
&lt;br /&gt;
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. &lt;br /&gt;
It is important for the AVMM to detect inconsistencies between the user&#039;s log and the machine&#039;s log (in case of foul play), so the AVMM simply cross-references messages and inputs during replay, easily detecting any discrepancies.&lt;br /&gt;
&lt;br /&gt;
The AVMM periodically takes snapshots of the AVM&#039;s current state. This facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered slightly by making the snapshots incremental (only saving state that has changed since the last snapshot). The user can authenticate a snapshot using a hash tree of the state (generated by the AVMM); the AVMM updates the hash tree after each snapshot.&lt;br /&gt;
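The snapshot authentication described above can be illustrated with a small hash tree. This is a rough sketch in Python, not the AVMM implementation; the page contents and the duplicate-last-node rule for odd levels are illustrative assumptions:

```python
import hashlib

def H(data):
    # Stand-in for the collision-resistant hash function the paper assumes.
    return hashlib.sha256(data).digest()

def tree_root(pages):
    """Root of a binary hash tree over the snapshot's memory pages."""
    level = [H(p) for p in pages]
    while len(level) != 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

snapshot = [b"page0", b"page1", b"page2", b"page3"]
logged_root = tree_root(snapshot)          # root the AVMM would put in the log

# Auditor recomputes the root from the downloaded snapshot:
assert tree_root(snapshot) == logged_root

# A single modified page changes the root, exposing the discrepancy:
tampered = [b"page0", b"EVIL!", b"page2", b"page3"]
assert tree_root(tampered) != logged_root
```

Incremental snapshots fit this scheme naturally: only the hashes on the path from a changed page to the root need recomputing.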
&lt;br /&gt;
&#039;&#039;&#039;Tamper-Evident Log&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The log is made up of hash-chained entries.&lt;br /&gt;
Each log entry has the form e = (s, t, c, h), where:&lt;br /&gt;
s = monotonically increasing sequence number&lt;br /&gt;
t = entry type&lt;br /&gt;
c = data of that type&lt;br /&gt;
h = hash value&lt;br /&gt;
&lt;br /&gt;
The hash value is calculated by: h&lt;sub&gt;i&lt;/sub&gt; = H(h&lt;sub&gt;i-1&lt;/sub&gt; || s || t || H(c))&lt;br /&gt;
H() is a hash function.&lt;br /&gt;
|| stands for concatenation.&lt;br /&gt;
&lt;br /&gt;
Each message sent gets signed with a private key; the AVMM logs each message with the signature attached but removes the signature before passing the message to the AVM. To ensure nonrepudiation, an authenticator is attached to each outgoing message.&lt;br /&gt;
&lt;br /&gt;
To detect when a message is dropped, each party sends an acknowledgement for each message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages entirely, the machine is presumed to have failed.&lt;br /&gt;
&lt;br /&gt;
To perform a log check, the user retrieves a pair of authenticators and then challenges the machine to produce the log segment between the two. The log is computationally infeasible to edit without breaking the hash chain; thus, if the log has been tampered with, the hash chain will not verify and the user will be notified of the tampering.&lt;br /&gt;
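The hash chain above can be sketched in a few lines of Python. This is illustrative only (not the paper&#039;s code); the entries follow the e = (s, t, c, h) form, and the all-zero genesis value is an assumption:

```python
import hashlib

def H(data):
    # Stand-in for the pre-image- and collision-resistant hash the paper assumes.
    return hashlib.sha256(data).digest()

def append_entry(log, s, t, c):
    """Append entry e = (s, t, c, h) with h = H(h_prev || s || t || H(c))."""
    h_prev = log[-1][3] if log else b"\x00" * 32   # assumed genesis value
    h = H(h_prev + str(s).encode() + t.encode() + H(c))
    log.append((s, t, c, h))

def chain_is_intact(log):
    """Recompute every hash; any in-place tampering breaks the chain."""
    h_prev = b"\x00" * 32
    for s, t, c, h in log:
        if h != H(h_prev + str(s).encode() + t.encode() + H(c)):
            return False
        h_prev = h
    return True

log = []
append_entry(log, 1, "send", b"hello")
append_entry(log, 2, "recv", b"ack")
assert chain_is_intact(log)

# Tamper with the payload of the first entry without recomputing hashes:
log[0] = (1, "send", b"HELLO", log[0][3])
assert not chain_is_intact(log)
```

Because each h incorporates the previous entry&#039;s hash, rewriting any entry forces recomputation of every later hash, which the signed authenticators then expose.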
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Auditing Mechanism&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
From the VMM&#039;s perspective, the replayed execution is deterministic.&lt;br /&gt;
&lt;br /&gt;
To perform an audit, the user:&lt;br /&gt;
&lt;br /&gt;
1. obtains a segment of the machine&#039;s log and the authenticators&lt;br /&gt;
&lt;br /&gt;
2. downloads a snapshot of the AVM at the beginning of the segment&lt;br /&gt;
&lt;br /&gt;
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.&lt;br /&gt;
&lt;br /&gt;
The user can verify the execution of software through three different checks: verifying the log, the snapshot, and the execution.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a log segment, the user retrieves from the machine the authenticators whose sequence numbers fall in the range of the segment, then downloads the log segment itself, starting with the most recent snapshot before the beginning of the segment. The user then checks the authenticators against the log for tampering. If this step succeeds, the user can assume the log segment is genuine. If the machine is faulty, the segment will be unavailable for download, or the machine may return a corrupted log segment; either outcome can be used to convince a third party of the fault.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify the snapshot, the user obtains a snapshot of the AVM&#039;s state at the beginning of the log segment. The user downloads the snapshot from the machine and recomputes the hash tree. The new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use them to convince a third party of the machine&#039;s fault.&lt;br /&gt;
&lt;br /&gt;
In order to verify the execution of a log segment, the user needs three inputs: the log segment, the snapshot, and the public keys of the machine and of any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (is the log well-formed?) and a semantic check (does the information in the log correspond to a correct execution of the machine?).&lt;br /&gt;
&lt;br /&gt;
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and exiting the AVM.&lt;br /&gt;
&lt;br /&gt;
The semantic check creates a local VM that replays the machine&#039;s log segment; the VM is initialized with a snapshot from the machine if possible. The local VM then runs the log segment and the resulting data is recorded. The auditing tool checks the log entries, inputs, outputs, and snapshot hashes of the replayed execution against the original log. If any discrepancies are detected, the fault is reported and the discrepancy can be used as evidence of the fault.&lt;br /&gt;
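The semantic check boils down to deterministic replay plus comparison against the logged outputs. A minimal sketch, where replay_fn stands in for the reference VM image and the toy doubling workload is a hypothetical example, not from the paper:

```python
def semantic_check(logged_events, replay_fn):
    """Replay the segment and compare each replayed output to the logged one.

    logged_events: list of (inputs, claimed_output) pairs from the log segment.
    replay_fn: deterministic function standing in for the reference VM image.
    Returns the index of the first discrepancy, or None if the log is consistent.
    """
    for i, (inputs, claimed_output) in enumerate(logged_events):
        if replay_fn(inputs) != claimed_output:
            return i          # evidence of a fault at this event
    return None

# A toy "reference image": the correct software doubles its input.
reference = lambda x: 2 * x

honest_log = [(3, 6), (5, 10)]
cheating_log = [(3, 6), (5, 11)]   # machine claimed an output replay cannot produce

assert semantic_check(honest_log, reference) is None
assert semantic_check(cheating_log, reference) == 1
```

This is why no trusted hardware is needed: any output the machine could not have produced by running the reference software is contradicted by the replay itself.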
&lt;br /&gt;
Why is it better?&lt;br /&gt;
[To Do]&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
For the comprehension of the reader, it is important for a paper/article/essay to have a good overview/layout. The introduction clearly describes what the reader should expect in the following pages, especially which problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of advantages and disadvantages of an AVM. A good example is &amp;quot;Cheat Detection&amp;quot;. Cheaters use programs that go around the original game code to gain a major advantage over other players. Since an AVM is generic in its cheat detection, it supports a wider range of cheats than most other cheat-detection approaches. The logs also give the game a replay function: players using an AVM can see how other players play by replaying the game from the other player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player might suffer from the AVM itself. Everything is logged and stored on the hard drive, which takes a large amount of space; in the example in the paper it is 148 MB per hour after compression. The logging overhead also reduces the frame rate, and running the game inside the AVM increases the ping time to the server.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP), Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings to this article, we can easily choose what parts each member would like to work on, obviously since there are more members than parts, multiple members will have to work on the same parts or can work on all parts, I guess it&#039;s really up to you. I know that most people have a lot of projects coming up so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good, and As i was going to post what I had for research problem, I just saw you posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you write and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I didn&#039;t write anything yet to Critique. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-if anyone has information that they are working on they can just post it up and at least others can look at it and maybe build up stuff on it, and I&#039;m sure everyone is aware of the extension that we got also, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later that night. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- Just added my contribution section, can someone proof read and sign it before I move it over to the essay. I didn&#039;t do the &amp;quot;why is it better&amp;quot; part because I found the implementation took a lot of writing. For anyone that wants to do the other part, I&#039;d suggest comparing AVMs to PunkBuster and/or VAC, and a cloud computing service (focusing on the auditing). Cheers --[[User:Hirving|Hirving]] 19:44, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I started that what is better/worse part in the Critique section. I will add the comparison with AVMs to Punkbuster and/or VAC soon. I personally feel like there is not that much to write for the Critique section. -- [[User:Sschnei1|Sschnei1]] 20:39, 24 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5530</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5530"/>
		<updated>2010-11-24T20:39:09Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello group. Please post your information here. I assume everybody has read the email at your Connect account. Does anyone specifically want to send him the email with the group members listed? If not, I will just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its executions into a file so that it can be replayed in order to see the executions and follow what was happening on the machine. Remus [[#References | [1]]] has contributed a highly efficient snap-shotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action taken on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; There are programs like GridCop[[#References | [2]]] that can monitor the progress and execution of a remotely executing program by requesting beacon packets. When the remote computer sends the packets, the receiving/logging computer must be trusted (hardware, software, OS) so that the reception of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will produce an inaccurate outcome. An AVM, by contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and preventative software is that it must already know about the specific cheats or situations it can handle. An AVM is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; This refers to a situation where the operations of an execution do not match those of the trusted host/reference execution; hence a violation has occurred.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative  for the first part : &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tackles a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it would hope that the interaction is independent of the particular node and depends only on the intended software. Say node A interacts with node B using execution exe1, and node A also interacts with node C using exe1, but node C has been modified and responds with exe2. We can then assume that the responses of B and C will differ. Being able to prove beyond doubt that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. There must be a certain amount of trust in the interactions between one user and another, as well as between a user and the host. When a node (user or computer) expects some result or feedback from another node, it would hope that the interaction is the same no matter which node it is done with. Say node A interacts with node B using execution exe1; when A and B then interact with node C, they would both expect execution exe1, but if node C behaves differently and executes exe2, it would be beneficial to be notified of the difference. Some concrete examples: node A is playing a game with node B, and the game executed on B is the same as on A; when A plays with node C, however, C executes the same operations as A plus a cheating program. Or: node A buys products from node B&#039;s server, which processes the order and then deletes A&#039;s sensitive information (execution 1); when A buys from node C&#039;s server, the order is processed but A&#039;s sensitive information is also rerouted to another server to be used without permission. These are only a few cases where the operations in an execution need to be logged and verified. The problem being addressed is to create a procedure by which a node can be held accountable, logging the operations in an execution to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games there are cheats that users run to gain benefits that were not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a cheating operation they have catalogued before is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can run in the background of a game such as CounterStrike, and the PunkBuster system on the user&#039;s machine already has aimbot.exe catalogued as a cheating program, PunkBuster might notify the current game servers or even prevent the user from playing until the aimbot.exe process is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, without a doubt, that a node is faulty and to prove it with solid evidence. It can also be used to defend a node against a false accusation. Numerous systems already employ accountability, but they are mostly tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview[[#References |[7]]], a system closely related to this work, must be implemented inside the application, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets to see whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another problem related to the paper is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Observing network activity is a common solution, looking at the inbound and outbound traffic of the node. This can reveal how the software is operating, or in the case of an AVM, how the whole virtual machine is working. GridCop[[#References |[8]]] is one example, which periodically inspects a small number of packets. Another way to detect faults remotely is to use a trusted node, which can tell immediately if a fault occurs or an unauthorized modification is made. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-and anything else you would like to add or modify; leave a note in the discussion section if you want me to take another look or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the processes of an execution on a specific node (computer) depends heavily on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations of some execution that occurred on a node. Replaying those operations shows what the node was doing, which would seem sufficient for finding out whether a node was causing integrity violations. The issue with deterministic replay is not the concept of snapshotting/recording the operations; it is that the data written to the replay log may be tampered with by the node itself so that the replay shows ideal results. By faking the results of the operations, the tested computer can make the auditing computer falsely believe that all operations ran normally. The logging done by these recording programs is directly related to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most useful contribution of the accountable virtual machine (AVM) proposed in this paper is the implementation of the accountable virtual machine monitor (AVMM), which enables fault checking of virtual machines in a cloud computing environment. The AVMM can be broken down into several parts: the virtual machine monitor (VMM), the tamper-evident log, and the auditing mechanisms. The VMM is based on the VMM found in VMWare Workstation 6.5.1[[#References |[9]]], the tamper-evident log was adapted from code in PeerReview[[#References |[7]]], and the audit tools were built from scratch. &lt;br /&gt;
&lt;br /&gt;
The accountable virtual machine monitor relies on four assumptions:&lt;br /&gt;
&lt;br /&gt;
1. All transmitted messages are received, if retransmitted sufficiently often.&lt;br /&gt;
&lt;br /&gt;
2. Machines and users have access to a hash function that is pre-image resistant, second pre-image resistant, and collision resistant.&lt;br /&gt;
&lt;br /&gt;
3. All parties have a certified keypair that can be used to sign messages.&lt;br /&gt;
&lt;br /&gt;
4. To audit a log, the user has a reference copy of the VM used.&lt;br /&gt;
&lt;br /&gt;
The job of the AVMM is to record all incoming and outgoing messages to a tamper-evident log, along with enough information about the execution to enable deterministic replay. &lt;br /&gt;
&lt;br /&gt;
The AVMM must record nondeterministic inputs (such as hardware interrupts). Because such input is asynchronous, its exact timing must also be recorded so that the inputs can be injected at the same points during replay. Wall-clock time is not accurate enough for this, so the AVMM uses a combination of the instruction pointer, a branch counter, and possibly additional registers. Not all inputs have to be recorded this way (e.g., software interrupts), because they send requests to the AVM that will be issued again during replay.     &lt;br /&gt;
&lt;br /&gt;
Two parallel streams appear in the tamper-evident log: message exchanges and nondeterministic inputs. &lt;br /&gt;
It is important for the AVMM to detect inconsistencies between the user&#039;s log and the machine&#039;s log (in case of foul play), so the AVMM cross-references messages and inputs during replay, which easily exposes any discrepancies.&lt;br /&gt;
&lt;br /&gt;
The AVMM periodically takes snapshots of the AVM&#039;s current state. This facilitates fine-grained audits for the user, but it also increases overhead. The overhead is lowered slightly by making the snapshots incremental (only the state that has changed since the last snapshot is saved). The user can authenticate a snapshot using a hash tree of the state (generated by the AVMM); the AVMM updates the hash tree after each snapshot.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tamper-Evident Log&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The log is made up of hash-chained entries.&lt;br /&gt;
Each log entry has the form e = (s, t, c, h), where:&lt;br /&gt;
s = a monotonically increasing sequence number&lt;br /&gt;
t = the entry type&lt;br /&gt;
c = the data of that type&lt;br /&gt;
h = a hash value&lt;br /&gt;
&lt;br /&gt;
The hash value is calculated as h = H(h&lt;sub&gt;i-1&lt;/sub&gt; || s || t || H(c)), where&lt;br /&gt;
H() is a hash function and&lt;br /&gt;
|| stands for concatenation.&lt;br /&gt;
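As a rough illustration (not the paper&#039;s code), the hash-chain computation above can be sketched in Python, with SHA-256 standing in for H and an assumed 8-byte encoding of the sequence number:&lt;br /&gt;

```python
import hashlib

def H(data):
    # Stand-in for the pre-image- and collision-resistant hash H.
    return hashlib.sha256(data).digest()

def log_entry_hash(h_prev, s, t, c):
    # h = H(h_(i-1) || s || t || H(c)); "||" is concatenation.
    # The 8-byte big-endian encoding of s is an assumption made here.
    return H(h_prev + s.to_bytes(8, "big") + t + H(c))
```

Because each entry&#039;s hash includes the previous entry&#039;s hash, every entry transitively commits to the entire log prefix before it.&lt;br /&gt;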
&lt;br /&gt;
Each message sent is signed with a private key; the AVMM logs the message with the signature attached but removes it before passing the message to the AVM. To ensure nonrepudiation, an authenticator is attached to each outgoing message.&lt;br /&gt;
&lt;br /&gt;
To detect when a message is dropped, each party sends an acknowledgement for every message it receives. If an acknowledgement is not received, the message is resent a few times; if the user stops receiving messages entirely, the machine is presumed to have failed.&lt;br /&gt;
&lt;br /&gt;
To perform a log check, the user retrieves a pair of authenticators and then challenges the machine to produce the log segment between the two. The log is computationally infeasible to edit without breaking the hash chain; thus, if the log has been tampered with, the hash chain will differ and the user will be notified of the tampering.&lt;br /&gt;
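The log check described above can be sketched as follows; this is a hypothetical illustration under the same assumptions (SHA-256 for H, 8-byte sequence numbers), not the paper&#039;s implementation:&lt;br /&gt;

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def entry_hash(h_prev, s, t, c):
    # Same chain rule as above: h = H(h_(i-1) || s || t || H(c)).
    return H(h_prev + s.to_bytes(8, "big") + t + H(c))

def verify_segment(h_start, h_end, entries):
    # entries: (s, t, c) tuples claimed to lie between two authenticators.
    # Recompute the chain from the starting authenticator; any edited,
    # inserted, or dropped entry makes the result disagree with h_end.
    h = h_start
    for s, t, c in entries:
        h = entry_hash(h, s, t, c)
    return h == h_end
```

A tampered or incomplete segment fails this check, and the mismatching authenticators (signed by the machine) serve as evidence for a third party.&lt;br /&gt;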
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Auditing Mechanism&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
From the VMM&#039;s perspective, everything is deterministic.&lt;br /&gt;
&lt;br /&gt;
To perform an audit, the user:&lt;br /&gt;
&lt;br /&gt;
1. obtains a segment of the machine&#039;s log and the authenticators&lt;br /&gt;
&lt;br /&gt;
2. downloads a snapshot of the AVM at the beginning of the segment&lt;br /&gt;
&lt;br /&gt;
3. replays the entire segment, starting from the snapshot, to verify that the events in the log correspond to a correct execution of the software.&lt;br /&gt;
&lt;br /&gt;
The user can verify the execution of software through three different checks: verifying the log, the snapshot, and the execution.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify a log segment, the user retrieves from the machine the authenticators whose sequence numbers fall in the range of the segment. The user then downloads the log segment from the machine, starting with the most recent snapshot before the segment and ending with the most recent snapshot before the end of the segment, and checks the authenticators for tampering. If this step succeeds, the user can assume the log segment is authentic. If the machine is faulty, the segment will be unavailable for download, or a corrupted log segment may be returned; this can be used to convince a third party of the fault.&lt;br /&gt;
&lt;br /&gt;
When the user wants to verify the snapshot, the user obtains a snapshot of the AVM&#039;s state at the beginning of the log segment. The user then downloads the snapshot from the machine, and the hash tree is recomputed. The new hash tree is compared to the hash tree contained in the original log segment. If any discrepancies are detected, the user can use them to convince a third party of the machine&#039;s fault.&lt;br /&gt;
&lt;br /&gt;
In order to verify the execution of a log segment, the user needs three inputs: the log segment, the snapshot, and the public keys of the machine and any users of the machine. The auditing tool performs two checks on the log segment: a syntactic check (is the log well-formed?) and a semantic check (does the information in the log reflect a correct execution of the machine?).&lt;br /&gt;
&lt;br /&gt;
The syntactic check verifies that all log entries are in the proper format, that the signatures on each message and acknowledgement are valid, that each message was acknowledged, and that the sequence of sent and received messages matches the sequence of messages entering and exiting the AVM.&lt;br /&gt;
&lt;br /&gt;
The semantic check creates a local VM that executes the machine&#039;s log segment; the VM is initialized with a snapshot from the machine when possible. The local VM then runs the log segment and the results are recorded. The auditing tool checks the replayed execution&#039;s log entries, inputs, outputs, and snapshot hashes against the original log. Any discrepancy is reported and can be used as evidence of a fault.&lt;br /&gt;
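As a toy illustration of the semantic check (the function name and data shapes here are assumptions for illustration, not the paper&#039;s interfaces):&lt;br /&gt;

```python
def semantic_check(logged_io, replay_step):
    # logged_io: list of (input, logged_output) pairs from the machine's log.
    # replay_step: a deterministic reference implementation, standing in for
    # replaying the reference VM image. Returns the indices of any log
    # entries whose replayed output disagrees with what was logged.
    discrepancies = []
    for i, (inp, logged_out) in enumerate(logged_io):
        if replay_step(inp) != logged_out:
            discrepancies.append(i)
    return discrepancies
```

In the real system the comparison covers messages, nondeterministic inputs, and snapshot hashes rather than simple input/output pairs, but the principle is the same: replay deterministically and flag any divergence from the log.&lt;br /&gt;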
&lt;br /&gt;
Why is it better?&lt;br /&gt;
[To Do]&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
For the reader&#039;s comprehension, it is important for a paper/article/essay to have a clear overview and layout. The introduction clearly describes what the reader can expect in the following pages, especially which problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of advantages and disadvantages of an AVM. A good example is &amp;quot;Cheat Detection&amp;quot;. Cheaters use programs that go around the original game code to gain a major advantage over other players. Since an AVM detects cheats generically, it supports a wider range of cheats than most other cheat-detection mechanisms. The logs also let the game be replayed: players using an AVM can see how other players played by replaying the game from that player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player may suffer under the AVM. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper&#039;s example, 148 MB per hour after compression. The logging overhead also reduces the frame rate, and the indirection through the AVM increases the ping time to the server.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP),Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[9] VMWare Workstation 6.5.1 web site. http://www.vmware.com/products/workstation/&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings to this article, we can easily choose what parts each member would like to work on, obviously since there are more members than parts, multiple members will have to work on the same parts or can work on all parts, I guess it&#039;s really up to you. I know that most people have a lot of projects coming up so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good, and as I was going to post what I had for the research problem, I saw you had posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I didn&#039;t write anything yet to Critique. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-If anyone has information they are working on, they can post it up so others can look at it and maybe build on it. I&#039;m sure everyone is aware of the extension we got, but let&#039;s try to finish this in the next few days. --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- Just added my contribution section; can someone proofread and sign it before I move it over to the essay? I didn&#039;t do the &amp;quot;why is it better&amp;quot; part because I found the implementation took a lot of writing. For anyone who wants to do the other part, I&#039;d suggest comparing AVMs to PunkBuster and/or VAC, and a cloud computing service (focusing on the auditing). Cheers --[[User:Hirving|Hirving]] 19:44, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I started that what is better/worse part in the Critique section. I will add the comparison with AVMs to Punkbuster and/or VAC soon. -- [[User:Sschnei1|Sschnei1]] 20:39, 24 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5483</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5483"/>
		<updated>2010-11-24T00:35:22Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello group. Please post your information here. I assume everybody has read the email at your Connect account. Does anyone specifically want to send him the email with the group members listed? If not, I will just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its executions into a file so that it can be replayed in order to see the executions and follow what was happening on the machine. Remus [[#References | [1]]] has contributed a highly efficient snap-shotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action taken on the virtual machine is recorded and can be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; There are programs like GridCop[[#References | [2]]] that can monitor the progress and execution of a remotely executing program by requesting beacon packets. When the remote computer sends the packets, the receiving/logging computer must be trusted (hardware, software, OS) so that the reception of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will produce an inaccurate outcome. An AVM, by contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and prevention software is that it must already know about the specific cheats or situations it handles. An AVM, by contrast, is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; An integrity violation occurs when the observed operations of an execution do not match those of the trusted host/reference execution.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative  for the first part : &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tackles a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it hopes that the interaction depends only on the intended software, not on the particular node. Say node A interacts with node B running execution exe1, and node A also interacts with node C, which should run exe1 but has been modified and responds with exe2. The responses of B and C will therefore differ. Being able to prove beyond doubt that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust between users, and between each user and the host. When a node (user or computer) expects some result or feedback from another node, it hopes that an interaction with node A would go the same way if it were done with another node, node B. Say node A interacts with node B via execution exe1; when A and B later interact with node C, both expect exe1, but if C behaves differently and runs exe2, it would be beneficial to be notified of the difference. Some examples make this concrete. Node A plays a game with node B, and the game executed on B is the same as on A; when A plays with node C, however, C executes the same operations as A plus a cheating program. Or node A buys products from node B&#039;s server, which processes the order and then deletes A&#039;s sensitive information (execution 1); when A buys from node C&#039;s server, the order is processed, but the sensitive information A provided is also rerouted to another server to be used without permission. These are only a few cases in which the operations of an execution need to be logged and verified. The problem addressed here is to devise a procedure by which a node can be held accountable, logging the operations of an execution so as to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games, users run cheats to gain benefits that the original game did not intend.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether cheating is occurring; rather, they check whether a previously catalogued cheating program is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can run in the background of a game such as CounterStrike, and the PunkBuster system installed on the user&#039;s machine already has aimbot.exe catalogued as a cheat by its developers, PunkBuster might notify the current game servers or even prevent the user from playing until the aimbot.exe process is no longer running. &lt;br /&gt;
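The blacklist approach described above reduces to a few lines of pseudologic (a deliberate simplification; the program names and the set-based catalogue are illustrative, not PunkBuster&#039;s actual mechanism). Its limitation is visible immediately: only cheats already in the catalogue are detected.

```python
# Known-cheat blacklist scanning, greatly simplified:
# compare running process names against a fixed catalogue.
KNOWN_CHEATS = {"aimbot.exe", "wallhack.dll"}   # illustrative names

def scan(running_processes):
    """Return the detected known cheats; unknown cheats slip through."""
    return KNOWN_CHEATS & set(running_processes)

assert scan(["game.exe", "aimbot.exe"]) == {"aimbot.exe"}
assert scan(["game.exe", "brand_new_cheat.exe"]) == set()  # the limitation
```

An AVM avoids this limitation because it audits the execution itself rather than matching against a catalogue of known cheats.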
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to determine beyond doubt that a node is faulty and to prove it with solid evidence. Accountability can also defend a node threatened with a false accusation. Numerous systems already provide accountability, but most are tied to specific applications, where a point of reference is needed for comparison. For example, PeerReview[[#References |[7]]], a system closely related to this work, must be built into the application itself, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets to check whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Observing network activity is a common solution, inspecting the node&#039;s inbound and outbound traffic to infer how the software, or in the case of an AVM the whole virtual machine, is behaving. GridCop[[#References |[8]]] is one example that periodically inspects a small number of packets. Another way to detect faults remotely is to use a trusted node, which can tell immediately if a fault occurs or if a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-and anything else you would like to add or modify, or leave a note in the discussion section if you want me to relook at or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the execution of a specific node (computer) depends heavily on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations of an execution that occurred on a node. Replaying those operations shows what the node was doing, which might seem sufficient to find out whether a node caused integrity violations. The issue with deterministic replay is not the snapshotting/recording of operations itself, but the fact that the data written to the log may be tampered with by the node so that the replay shows optimal results. By faking the results of its operations, the node can make the auditing computer falsely believe that it is running all operations normally. The logging done by these recording programs is therefore directly relevant to the work needed to detect integrity violations.&lt;br /&gt;
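One common defence against this kind of log tampering is a tamper-evident log built on a hash chain, the building block that accountability systems in this space rely on: each entry&#039;s hash covers the previous hash, so rewriting an earlier entry invalidates every later one. A toy sketch (illustrative, not the paper&#039;s implementation):

```python
import hashlib

def append(log, entry):
    """Append an entry whose hash covers the previous hash, chaining history."""
    prev = log[-1][1] if log else "0" * 64
    h = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append((entry, h))

def verify(log):
    """Recompute the chain; any tampered entry breaks all later hashes."""
    prev = "0" * 64
    for entry, h in log:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != h:
            return False
        prev = h
    return True

log = []
for e in ["recv pkt1", "send pkt2", "recv pkt3"]:
    append(log, e)
assert verify(log)

log[0] = ("recv FORGED", log[0][1])   # the node rewrites its own history
assert not verify(log)                # an auditor detects the tampering
```

Because the chain head commits to the entire history, a node cannot quietly rewrite earlier entries to make a replay look clean.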
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing; this is just part1 [[User:Sschnei1|Sschnei1]] 00:35, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
For the reader&#039;s comprehension, it is important for a paper/article/essay to have a good overview and layout. The introduction clearly describes what the reader should expect in the following pages, especially what problems are addressed and how they are solved. &lt;br /&gt;
&lt;br /&gt;
This paper gives multiple examples of the advantages and disadvantages of an AVM. A good example is &amp;quot;Cheat Detection&amp;quot;. Cheaters use programs that bypass the original game code to gain a major advantage over other players. Since an AVM detects cheats generically, it covers a wider range of cheats than most other cheat-detection mechanisms. The logs also give the game a replay function: players using an AVM can see how other players play by replaying the game from a player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player may suffer from the AVM&#039;s overhead. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper&#039;s example it is 148 MB per hour after compression. The logging overhead also reduces the frame rate, and the connection through the AVM increases the ping time to the server.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP),Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings to this article, we can easily choose what parts each member would like to work on, obviously since there are more members than parts, multiple members will have to work on the same parts or can work on all parts, I guess it&#039;s really up to you. I know that most people have a lot of projects coming up so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good, and as I was going to post what I had for research problem, I just saw you posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I didn&#039;t write anything yet to Critique. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-if anyone has information that they are working on they can just post it up and at least others can look at it and maybe build up stuff on it, and I&#039;m sure everyone is aware of the extension that we got also, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5482</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5482"/>
		<updated>2010-11-24T00:34:40Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at your connect account. Anyone specific wants to send him the email with the group members inside? If not, I just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information in here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania; Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution, including its nondeterministic inputs, into a log file; the log can later be replayed to reproduce the execution and follow what happened on the machine. Remus [[#References | [1]]] contributed a highly efficient snapshotting mechanism that such replays can build on.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; In the context of this paper, accountability means that every action performed on the virtual machine is recorded and can later be used to verify the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; Programs like GridCop[[#References | [2]]] can monitor the progress and execution of a remotely executing program by requesting beacon packets. The receiving/logging computer must be trusted (hardware, software, and OS) so that the received packets remain consistent. To detect a fault in a remote system, every packet must arrive safely, and any interruptions during logging must be handled, or the resulting inconsistencies will produce an inaccurate outcome. An AVM, in contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and prevention software is that it must already know about the specific cheats or situations it handles. An AVM, by contrast, is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; An integrity violation occurs when the observed operations of an execution do not match those of the trusted host/reference execution.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative  for the first part : &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tackles a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some result or feedback from another node, it hopes that the interaction depends only on the intended software, not on the particular node. Say node A interacts with node B running execution exe1, and node A also interacts with node C, which should run exe1 but has been modified and responds with exe2. The responses of B and C will therefore differ. Being able to prove beyond doubt that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust between users, and between each user and the host. When a node (user or computer) expects some result or feedback from another node, it hopes that an interaction with node A would go the same way if it were done with another node, node B. Say node A interacts with node B via execution exe1; when A and B later interact with node C, both expect exe1, but if C behaves differently and runs exe2, it would be beneficial to be notified of the difference. Some examples make this concrete. Node A plays a game with node B, and the game executed on B is the same as on A; when A plays with node C, however, C executes the same operations as A plus a cheating program. Or node A buys products from node B&#039;s server, which processes the order and then deletes A&#039;s sensitive information (execution 1); when A buys from node C&#039;s server, the order is processed, but the sensitive information A provided is also rerouted to another server to be used without permission. These are only a few cases in which the operations of an execution need to be logged and verified. The problem addressed here is to devise a procedure by which a node can be held accountable, logging the operations of an execution so as to provide evidence of faults committed by that node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work on preventing or detecting integrity violations can be separated into different categories. The first is cheat detection: in many games, users run cheats to gain benefits that the original game did not intend.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether cheating is occurring; rather, they check whether a previously catalogued cheating program is running on the user&#039;s system. For example, if a known cheating program named aimbot.exe can run in the background of a game such as CounterStrike, and the PunkBuster system installed on the user&#039;s machine already has aimbot.exe catalogued as a cheat by its developers, PunkBuster might notify the current game servers or even prevent the user from playing until the aimbot.exe process is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to determine beyond doubt that a node is faulty and to prove it with solid evidence. Accountability can also defend a node threatened with a false accusation. Numerous systems already provide accountability, but most are tied to specific applications, where a point of reference is needed for comparison. For example, PeerReview[[#References |[7]]], a system closely related to this work, must be built into the application itself, which makes it less portable and harder to deploy than an AVM. PeerReview verifies the inbound and outbound packets to check whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another related problem is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Observing network activity is a common solution, inspecting the node&#039;s inbound and outbound traffic to infer how the software, or in the case of an AVM the whole virtual machine, is behaving. GridCop[[#References |[8]]] is one example that periodically inspects a small number of packets. Another way to detect faults remotely is to use a trusted node, which can tell immediately if a fault occurs or if a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-and anything else you would like to add or modify, or leave a note in the discussion section if you want me to relook at or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the execution of a specific node (computer) depends heavily on prior work on deterministic replay. Deterministic replay programs create a log file that can be used to replay the operations of an execution that occurred on a node. Replaying those operations shows what the node was doing, which might seem sufficient to find out whether a node caused integrity violations. The issue with deterministic replay is not the snapshotting/recording of operations itself, but the fact that the data written to the log may be tampered with by the node so that the replay shows optimal results. By faking the results of its operations, the node can make the auditing computer falsely believe that it is running all operations normally. The logging done by these recording programs is therefore directly relevant to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// first part of my writing&lt;br /&gt;
&lt;br /&gt;
For the reader&#039;s comprehension, it is important for a paper/article/essay to have a good overview and layout. The introduction clearly describes what the reader should expect in the following pages, especially what problems are addressed and how they are solved. &lt;br /&gt;
This paper gives multiple examples of the advantages and disadvantages of an AVM. A good example is &amp;quot;Cheat Detection&amp;quot;. Cheaters use programs that bypass the original game code to gain a major advantage over other players. Since an AVM detects cheats generically, it covers a wider range of cheats than most other cheat-detection mechanisms. The logs also give the game a replay function: players using an AVM can see how other players play by replaying the game from a player&#039;s log.&lt;br /&gt;
&lt;br /&gt;
The negative side is that the player may suffer from the AVM&#039;s overhead. Everything is logged and stored on the hard drive, which takes a large amount of space; in the paper&#039;s example it is 148 MB per hour after compression. The logging overhead also reduces the frame rate, and the connection through the AVM increases the ping time to the server.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP),Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Since there are more members than parts, multiple members will have to work on the same parts, or can work on all parts; I guess it&#039;s really up to you. I know that most people have a lot of projects coming up, so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good. Just as I was going to post what I had for the research problem, I saw you had posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I haven&#039;t written anything for the Critique yet. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-If anyone has material they are working on, they can just post it up so that others can at least look at it and maybe build on it. I&#039;m sure everyone is aware of the extension we got, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5479</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5479"/>
		<updated>2010-11-23T21:29:24Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at their Connect account. Does anyone specifically want to send him the email with the group members? If not, I&#039;ll just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution to a log file so that the execution can later be replayed, making it possible to follow exactly what was happening on the machine. Remus [[#References | [1]]] has contributed a highly efficient snapshotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action performed on the virtual machine is recorded and can be used as evidence against the machine or user when verifying the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; Programs like GridCop[[#References | [2]]] can monitor the progress and execution of a remotely executing program by requesting beacon packets. While the remote computer is sending these packets, the receiving/logging computer must be trusted (hardware, software, and OS) so that the reception of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will result in an inaccurate outcome. The AVM, in contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with this scanning and preventative software is that it can only handle the specific cheats or situations it already knows about. An AVM, by contrast, is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; This refers to a situation where the normal/expected operations of an execution do not match those of the host/reference (trusted) execution; hence a violation has occurred.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
**Possible alternative for the first part: &lt;br /&gt;
&lt;br /&gt;
The research presented in this paper tries to tackle a problem that has haunted computer scientists for a long time: how can you be sure that the software running on a remote machine is working correctly, or as intended? Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a trust relationship between users and a host. When a node (user or computer) expects some sort of result or feedback from another node, it would hope that the interaction depends only on the intended software, independent of the particular node. Say that node A interacts with node B through execution exe1, and node A also interacts with node C through exe1, but node C has been modified and responds with exe2. We can then assume that the responses of B and C will be different. Being able to prove, without any doubt, that node C has been modified is the purpose of this paper. &lt;br /&gt;
***Let me know what you think about it. I removed the redundant part, and I think I made it clearer and more concise. [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
** looks good to me, we&#039;ll put this part into the final essay instead of mine below --[[User:Mchou2|Mchou2]] 20:03, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
/// omit&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust in the interactions between one user and another, as well as between a user and a host. When a node (user or computer) expects some sort of result or feedback from another node, it would hope that the interaction done with node A is the same as it would be with another node, node B. Say, for example, that node A interacts with node B through execution exe1. When nodes A and B interact with node C, they would both expect to interact through execution exe1; but if node C interacts differently and executes exe2, it would be beneficial to be notified of this difference. This explanation might not seem too relevant without some examples. Suppose node A is playing a game with node B, and the game executed on node B is the same as on A; when node A plays with node C, however, node C is executing the same operations as node A plus a cheating program. Or suppose node A buys some products from node B&#039;s server, which processes the order and then deletes node A&#039;s sensitive information (execution 1); when node A buys from node C&#039;s server, the order is processed, but the sensitive information node A provided is also rerouted to another server so that it can be used without permission. These are only a few examples where the operations in an execution need to be logged and verified. The problem being addressed here is to create a procedure by which a node can be held accountable, and to log the operations in an execution so as to provide evidence of faults committed by a node. &lt;br /&gt;
&lt;br /&gt;
////&lt;br /&gt;
&lt;br /&gt;
Previous work done in efforts to prevent or detect integrity violations can be separated into different categories of operations. The first is cheat detection: in many games there are cheats that users employ, usually to create benefits for themselves that were not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a previously logged cheating operation is running on the user&#039;s system. For example, if there were a known cheating program named aimbot.exe that can run in the background of a game such as CounterStrike, and the PunkBuster system installed on the user&#039;s machine already had aimbot.exe logged as a cheating program by the developers, then PunkBuster might notify the current game servers or even prevent the user from playing any games until the aimbot.exe process is no longer running. &lt;br /&gt;
&lt;br /&gt;
Accountability is another important problem that many have already worked on. The main goal of an accountable system is to be able to determine, without a doubt, that a node is faulty, and to prove it with solid evidence. Accountability can also be used to defend a node against false accusations. Numerous systems already use accountability, but they are mostly tied to specific applications, where a point of reference must be used for comparison. For example, PeerReview[[#References |[7]]], a system closely related to what the research team has worked on, must be implemented inside the application, which makes it less portable and means it cannot be deployed as easily as an AVM. PeerReview verifies the inbound and outbound packets and can check whether the software is running as intended. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another problem related to the paper is remote fault detection in a distributed system: how can we determine whether a remote node is running the code correctly, or whether the machine itself is working as intended? Inspecting network activity is a common solution to this problem, looking at the inbound and outbound traffic of the node. This can reveal how the software is operating, or in the case of an AVM, how the whole virtual machine is working. GridCop[[#References |[8]]] is one example that inspects a small number of packets periodically. Another way of detecting faults remotely is to use a trusted node, which can tell immediately if a fault occurs or a modification is made where it should not have been. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-And anything else you would like to add or modify; or leave a note in the discussion section if you want me to relook at or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the processes of an execution on a specific node (computer) depends greatly on the work done on deterministic replay. Deterministic replay programs can create a log file that can be used to replay the operations of some execution that occurred on a node. Replaying the operations done on the node can show what the node was doing, and this would seem sufficient for finding out whether a node was causing integrity violations. The issue with deterministic replay is not the concept of snapshotting/recording the operations; it is that the data written into the replay log may be tampered with by the node itself so that the replay shows optimal results. By faking the results of the operations, the node can make the auditing computer falsely believe that the tested computer is running all operations as normal. The logging done by these recording programs relates directly to the work needed to detect integrity violations.&lt;br /&gt;
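To make the tampering problem above concrete, here is a minimal sketch of a hash-chained, tamper-evident log in Python. This is not the paper&#039;s actual implementation (the AVM logs much richer execution state); all names here are illustrative. Each entry commits to the hash of the previous entry, so an auditor who replays the log can detect any after-the-fact modification.&lt;br /&gt;

```python
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, event):
    # Each record commits to the hash of the previous record,
    # forming a hash chain (the idea used by accountability
    # systems such as PeerReview).
    prev_hash = log[-1][1] if log else GENESIS
    entry_hash = hashlib.sha256((prev_hash + event).encode()).hexdigest()
    log.append((event, entry_hash))

def verify_chain(log):
    # An auditor recomputes every hash; a single tampered
    # entry breaks the chain from that point on.
    prev_hash = GENESIS
    for event, entry_hash in log:
        expected = hashlib.sha256((prev_hash + event).encode()).hexdigest()
        if expected != entry_hash:
            return False
        prev_hash = entry_hash
    return True

log = []
append_entry(log, "recv packet 1")
append_entry(log, "send response 1")
assert verify_chain(log)

# A node that rewrites its history is caught on audit:
log[0] = ("recv packet 1 (forged)", log[0][1])
assert not verify_chain(log)
```

The point of the sketch is that plain deterministic-replay logs have no such commitment structure, which is why a malicious node can rewrite them; chaining the entries is one simple way to make such rewriting detectable.&lt;br /&gt;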
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and&lt;br /&gt;
A. Warfield. Remus: High availability via asynchronous virtual&lt;br /&gt;
machine replication. In Proceedings of the USENIX Symposium&lt;br /&gt;
on Networked Systems Design and Implementation (NSDI), Apr.&lt;br /&gt;
2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware.&lt;br /&gt;
http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof&lt;br /&gt;
playout for centralized and peer-to-peer gaming. IEEE/ACM&lt;br /&gt;
Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online&lt;br /&gt;
games against cheating. In Proceedings of the Workshop on Network&lt;br /&gt;
and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
[7] A. Haeberlen, P. Kuznetsov, and P. Druschel. PeerReview: Practical&lt;br /&gt;
accountability for distributed systems. In Proceedings of&lt;br /&gt;
the ACM Symposium on Operating Systems Principles (SOSP), Oct. 2007.&lt;br /&gt;
&lt;br /&gt;
[8] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but&lt;br /&gt;
verify: Monitoring remotely executing programs for progress&lt;br /&gt;
and correctness. In Proceedings of the ACM SIGPLAN Annual&lt;br /&gt;
Symposium on Principles and Practice of Parallel Programming&lt;br /&gt;
(PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Since there are more members than parts, multiple members will have to work on the same parts, or can work on all parts; I guess it&#039;s really up to you. I know that most people have a lot of projects coming up, so let&#039;s try to get this done asap, or at least bit by bit so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good. Just as I was going to post what I had for the research problem, I saw you had posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry I haven&#039;t written anything for the Critique yet. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I have started work on the contribution section. I&#039;ll have something up today or tomorrow. --[[User:Hirving|Hirving]] 19:55, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-If anyone has material they are working on, they can just post it up so that others can at least look at it and maybe build on it. I&#039;m sure everyone is aware of the extension we got, but let&#039;s try to finish this in the next few days --[[User:Mchou2|Mchou2]] 20:43, 23 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
- I agree with finishing it in the next few days. Then we have more time to focus on other courses like 3004. I will post something later tonight. -- [[User:Sschnei1|Sschnei1]] 21:29, 23 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5333</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5333"/>
		<updated>2010-11-22T14:50:45Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody read the email at their Connect account. Does anyone specifically want to send him the email with the group members? If not, I&#039;ll just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Supplementary Information:&#039;&#039;&#039; [http://research.microsoft.com/en-us/people/sriram/druschel.pptx Accountable distributed systems and the accountable cloud] - background of similar AVM implementation for distributed systems.&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution to a log file so that the execution can later be replayed, making it possible to follow exactly what was happening on the machine. Remus [[#References | [1]]] has contributed a highly efficient snapshotting mechanism for these replays.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039; Accountability in the context of this paper means that every action performed on the virtual machine is recorded and can be used as evidence against the machine or user when verifying the correctness of the application. The AVM is responsible for its actions and must answer for them to an auditor. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; Programs like GridCop[[#References | [2]]] can monitor the progress and execution of a remotely executing program by requesting beacon packets. While the remote computer is sending these packets, the receiving/logging computer must be trusted (hardware, software, and OS) so that the reception of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will result in an inaccurate outcome. The AVM, in contrast, does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can be either scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with this scanning and preventative software is that it can only handle the specific cheats or situations it already knows about. An AVM, by contrast, is designed to counter cheats in general.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Integrity Violations:&#039;&#039;&#039; This refers to a situation where the normal/expected operations of an execution do not match those of the host/reference (trusted) execution; hence a violation has occurred.&lt;br /&gt;
&lt;br /&gt;
- The word &amp;quot;node&amp;quot; is used to refer to a computer or server in order to represent the interactions between one computer and another, or a computer and a server.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Cloud computing, online multi-player games, and other online services such as auctions are only a few examples that rely on a system of trust between users and a host. These examples require a certain amount of trust in the interactions between one user and another, as well as between a user and a host. When a node (user or computer) expects some sort of result or feedback from another node, it would hope that the interaction done with node A is the same as it would be with another node, node B. Say, for example, that node A interacts with node B through execution exe1. When nodes A and B interact with node C, they would both expect to interact through execution exe1; but if node C interacts differently and executes exe2, it would be beneficial to be notified of this difference. This explanation might not seem too relevant without some examples. Suppose node A is playing a game with node B, and the game executed on node B is the same as on A; when node A plays with node C, however, node C is executing the same operations as node A plus a cheating program. Or suppose node A buys some products from node B&#039;s server, which processes the order and then deletes node A&#039;s sensitive information (execution 1); when node A buys from node C&#039;s server, the order is processed, but the sensitive information node A provided is also rerouted to another server so that it can be used without permission. These are only a few examples where the operations in an execution need to be logged and verified. The problem being addressed here is to create a procedure by which a node can be held accountable, and to log the operations in an execution so as to provide evidence of faults committed by a node. Previous work done in efforts to prevent or detect integrity violations can be separated into different categories of operations. 
The first is cheat detection: in many games there are cheats that users employ, usually to create benefits for themselves that were not intended by the original game.[[#References |[4]]] These detectors are not dynamic, in the sense that they do not actually detect whether a cheat is being used; rather, they check whether a previously logged cheating operation is running on the user&#039;s system. For example, if there were a known cheating program named aimbot.exe that can run in the background of a game such as CounterStrike, and the PunkBuster system installed on the user&#039;s machine already had aimbot.exe logged as a cheating program by the developers, then PunkBuster might notify the current game servers or even prevent the user from playing any games until the aimbot.exe process is no longer running. &lt;br /&gt;
&lt;br /&gt;
-If you could please fill in this section about Remote Fault Detection, that would be awesome, along with anything else you would like to add or modify; or leave a note in the discussion section if you want me to relook at or change something. --[[User:Mchou2|Mchou2]] 20:10, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The problem of logging and auditing the processes of an execution on a specific node (computer) depends greatly on the work done on deterministic replay. Deterministic replay programs can create a log file that can be used to replay the operations of some execution that occurred on a node. Replaying the operations done on the node can show what the node was doing, and this would seem sufficient for finding out whether a node was causing integrity violations. The issue with deterministic replay is not the concept of snapshotting/recording the operations; it is that the data written into the replay log may be tampered with by the node itself so that the replay shows optimal results. By faking the results of the operations, the node can make the auditing computer falsely believe that the tested computer is running all operations as normal. The logging done by these recording programs relates directly to the work needed to detect integrity violations.&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad is not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and A. Warfield. Remus: High availability via asynchronous virtual machine replication. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI), Apr. 2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but verify: Monitoring remotely executing programs for progress and correctness. In Proceedings of the ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware. http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof playout for centralized and peer-to-peer gaming. IEEE/ACM Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online games against cheating. In Proceedings of the Workshop on Network and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Since there are more members than parts, multiple members will have to work on the same parts, or can work on all parts; I guess it&#039;s really up to you. I know that most people have a lot of projects coming up, so let&#039;s try to get this done ASAP, or at least bit by bit, so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I can either work on Background Concepts, or Research problem. -[[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I&#039;m not sure whether the background concepts should be in point form or a paragraph, and whether it needs to be very long or not, but I shall work on both background concepts and research problem with you Jbaubin. --[[User:Mchou2|Mchou2]] 18:11, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-Sounds good. As I was going to post what I had for the research problem, I saw you had posted a big chunk of it. I&#039;ll be out for a while, but tonight I&#039;ll take a serious look at what you wrote and add what I had written. - [[User:Jbaubin|Jbaubin]]&lt;br /&gt;
&lt;br /&gt;
- Sorry, I haven&#039;t written anything for the Critique yet. I&#039;m making my notes and will post something tonight or tomorrow. -- [[User:Sschnei1|Sschnei1]] 14:50, 22 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5210</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=5210"/>
		<updated>2010-11-20T02:40:24Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody has read the email at your Connect account. Does anyone specifically want to send him the email with the group members listed? If not, I will just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Henry Irving hirving@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Jean-Benoit Aubin jbaubin@connect.carleton.ca &lt;br /&gt;
&lt;br /&gt;
Pradhan Nishant npradhan@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only Paul Cox didn&#039;t answer the email I sent this morning. &lt;br /&gt;
&lt;br /&gt;
Cox     Paul    pcox&lt;br /&gt;
&lt;br /&gt;
And I just sent an email to the teacher. &lt;br /&gt;
&lt;br /&gt;
--Jean-Benoit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Paper==&lt;br /&gt;
&lt;br /&gt;
 the paper&#039;s title, authors, and their affiliations. Include a link to the paper and any particularly helpful supplementary information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Title:&#039;&#039;&#039; Accountable Virtual Machines&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039; Andreas Haeberlen, Paarijaat Aditya, Rodrigo Rodrigues, Peter Druschel&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Affiliates:&#039;&#039;&#039;&lt;br /&gt;
University of Pennsylvania, Max Planck Institute for Software Systems (MPI-SWS)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Link to Paper:&#039;&#039;&#039; [http://www.usenix.org/events/osdi10/tech/full_papers/Haeberlen.pdf Accountable Virtual Machines]&lt;br /&gt;
&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
 Explain briefly the background concepts and ideas that your fellow classmates will need to know first in order to understand your assigned paper.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Accountable Virtual Machine (AVM)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Deterministic Replay&#039;&#039;&#039;: A machine can record its execution into a log file so that it can be replayed later, in order to follow exactly what was happening on the machine. Simple replay is not sufficient for finding changes/cheats in a system, because the data written into the replay log may be tampered with by the machine so that it shows ideal results. Remus [[#References | [1]]] contributed a highly efficient snapshotting mechanism for such replays, and its use can directly benefit the AVM.&lt;br /&gt;
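In miniature, deterministic replay means logging every nondeterministic input during the live run and feeding the identical inputs back later, so the replayed execution reproduces the original state exactly. This is a toy, hypothetical Python sketch of the concept, not the interface of any real replay tool:

```python
import random

def run(seed, log=None, replay=None):
    """Deterministic replay in miniature: record every nondeterministic
    input during live execution, then feed the same inputs back during
    replay to reproduce the execution exactly."""
    rng = random.Random(seed)
    state = 0
    for step in range(5):
        if replay is not None:
            value = replay[step]          # replay: reuse the logged input
        else:
            value = rng.randrange(100)    # live run: fresh nondeterminism
            if log is not None:
                log.append(value)
        state = state * 31 + value        # deterministic state update
    return state
```

Because the replayed run reads its "random" inputs from the log, it reaches the same final state even with a different seed; the weakness described above is that the node controls what goes into that log.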
&lt;br /&gt;
&#039;&#039;&#039;Accountability:&#039;&#039;&#039;&lt;br /&gt;
- not sure what to write here at the moment --[[User:Mchou2|Mchou2]] 06:01, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remote Fault Detection:&#039;&#039;&#039; There are programs like GridCop[[#References | [2]]] that can be used to monitor the progress and execution of a remotely executing program by receiving packets. When the remote computer is sending the packets, the receiving/logging computer must be a trusted computer (hardware, software, OS) so that the reception of packets remains consistent. To detect a fault in a remote system, every packet must arrive safely, and any interrupts during logging must be handled, or the inconsistencies will result in an inaccurate outcome. The AVM does not require trusted hardware and can be used over wide-area networks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cheat Detection:&#039;&#039;&#039; Cheating in games, or any specific modification of a program, can either be scanned for[[#References | [3][4]]] or prevented[[#References | [5][6]]] by certain programs. The issue with such scanning and preventative software is that it must already know about the specific cheats or situations it can handle. An AVM is designed to counter any kind of general cheat.&lt;br /&gt;
&lt;br /&gt;
==Research problem== &lt;br /&gt;
&lt;br /&gt;
 What is the research problem being addressed by the paper? How does this problem relate to past related work?&lt;br /&gt;
&lt;br /&gt;
==Contribution==&lt;br /&gt;
&lt;br /&gt;
 What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&lt;br /&gt;
 What is good and not-so-good about this paper? You may discuss both the style and content; be sure to ground your discussion with specific references. Simple assertions that something is good or bad are not enough - you must explain why.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
 You will almost certainly have to refer to other resources; please cite these resources in the style of citation of the papers assigned (inlined numbered references). Place your bibliographic entries in this section.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[1] B. Cully, G. Lefebvre, D. Meyer, M. Feeley, N. Hutchinson, and A. Warfield. Remus: High availability via asynchronous virtual machine replication. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI), Apr. 2008.&lt;br /&gt;
&lt;br /&gt;
[2] S. Yang, A. R. Butt, Y. C. Hu, and S. P. Midkiff. Trust but verify: Monitoring remotely executing programs for progress and correctness. In Proceedings of the ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), June 2005.&lt;br /&gt;
&lt;br /&gt;
[3] G. Hoglund. 4.5 million copies of EULA-compliant spyware. http://www.rootkit.com/blog.php?newsid=358.&lt;br /&gt;
&lt;br /&gt;
[4] PunkBuster web site. http://www.evenbalance.com/.&lt;br /&gt;
&lt;br /&gt;
[5] N. E. Baughman, M. Liberatore, and B. N. Levine. Cheat-proof playout for centralized and peer-to-peer gaming. IEEE/ACM Transactions on Networking (ToN), 15(1):1–13, Feb. 2007.&lt;br /&gt;
&lt;br /&gt;
[6] C. Mönch, G. Grimen, and R. Midtstraum. Protecting online games against cheating. In Proceedings of the Workshop on Network and Systems Support for Games (NetGames), Oct. 2006.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
 We can use this area to discuss or leave notes on general ideas or whatever you want to write here.&lt;br /&gt;
&lt;br /&gt;
-The current due date posted on the site for this essay is November 25th  --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-I think that since we are given the headings for this article, we can easily choose which parts each member would like to work on. Since there are more members than parts, multiple members will have to work on the same parts, or can work on all parts; I guess it&#039;s really up to you. I know that most people have a lot of projects coming up, so let&#039;s try to get this done ASAP, or at least bit by bit, so it&#039;s not something we have to worry too much about. --[[User:Mchou2|Mchou2]] 05:18, 19 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I would like to do the Contribution or Critique. -- [[User:Sschnei1|Sschnei1]] 02:40, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=4966</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=4966"/>
		<updated>2010-11-15T03:25:26Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Group Essay 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here. I assume everybody has read the email at your Connect account. Does anyone specifically want to send him the email with the group members listed? If not, I will just go ahead tomorrow at about 13:00 and send the email with the group members who wrote their contact information here. - [[User:Sschnei1|Sschnei1]] 03:25, 15 November 2010 (UTC)&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Matthew Chou mchou2@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Mark Walts mwalts@connect.carleton.ca&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_4&amp;diff=4938</id>
		<title>COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_4&amp;diff=4938"/>
		<updated>2010-11-14T03:15:17Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: Created page with &amp;quot;= Paper =&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Paper =&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=4937</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 4</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_4&amp;diff=4937"/>
		<updated>2010-11-14T01:34:48Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: Created page with &amp;quot;= Group Essay 2 =  Hello Group. Please post your information here:  Sebastian Schneider sschnei1@connect.carleton.ca&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Group Essay 2 =&lt;br /&gt;
&lt;br /&gt;
Hello Group. Please post your information here:&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider sschnei1@connect.carleton.ca&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3476</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3476"/>
		<updated>2010-10-14T00:39:55Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems an operating system must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A key component in managing these issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes get access to the system resources they require as efficiently as possible, while maintaining fairness between processes, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). As computer hardware has grown in complexity, for example with multi-core CPUs, operating system schedulers have evolved to handle these additional challenges. In this article we compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD which itself is a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs grew in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the creation of a new FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler would give processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD used data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it was assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
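The runqueue scan described above (highest priority first, take the head of the first non-empty queue) can be sketched in a few lines. This is a hypothetical Python model of the idea, not FreeBSD kernel code:

```python
from collections import deque

def pick_next(runqueues):
    """Scan runqueues from highest to lowest priority and take the
    first thread of the first non-empty queue, old-FreeBSD style.
    `runqueues` maps a numeric priority to a deque of thread names."""
    for prio in sorted(runqueues, reverse=True):
        queue = runqueues[prio]
        if queue:
            return queue.popleft()
    return None  # nothing runnable
```

Because every scheduling decision walks the queues, cost grows with the number of priorities and threads, which is exactly the O(n) behavior the text criticizes.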
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle symmetric multiprocessing or symmetric multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), Linux scheduler advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added another one that was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions to achieve this, including round-robin scheduling, iteration, and queues. A quick read-through of Linux&#039;s history suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features to allow tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades; its design traces its lineage back to the UNIX operating system, originally released in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). Early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; later schedulers improved on this, first with the constant-time O(1) scheduler and currently with CFS, whose operations run in O(log n) time.&lt;br /&gt;
&lt;br /&gt;
There are five basic algorithms for allocating CPU time[http://en.wikipedia.org/wiki/Scheduling_(computing)#Scheduling_disciplines][http://joshaas.net/linux/linux_cpu_scheduler.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;First-in, First-out: No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Shortest Time Remaining: Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Fixed-Priority Preemptive Scheduling: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Round-Robin Scheduling: Fair multi-tasking. This method is similar in concept to Fixed-Priority Preemptive Scheduling, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time. Round-robin scheduling was used in Linux 1.2.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Multilevel Queue Scheduling: Rule-based multi-tasking. This method is also similar to Fixed-Priority Preemptive Scheduling, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system. The O(1) scheduler used from Linux 2.6 up to 2.6.23 is based on a multilevel queue.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
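The round-robin discipline from the list above can be modeled in a few lines. This is a toy, hypothetical Python sketch (one queue, fixed quantum, no priorities), not kernel code:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Run each task for at most `quantum` units per turn until all finish.
    `tasks` maps a task name to its remaining CPU time; returns the order
    in which tasks complete."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(remaining, quantum)   # the task's slice this turn
        remaining -= ran
        if remaining:                   # still work left: requeue at the back
            queue.append((name, remaining))
        else:
            order.append(name)          # finished during this slice
    return order
```

Every runnable task gets the CPU once per pass over the queue, which is the &quot;fair multi-tasking&quot; property described above.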
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler used a round-robin policy with a circular queue, which made adding and removing processes efficient. With Linux 2.2, the scheduler changed: it introduced scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, in each of which a task could execute up to its time slice. If a task did not use all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When a task&#039;s time is out of balance, that task must be given more time, because the scheduler has to preserve fairness. To determine the balance, CFS maintains the amount of time already given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS is also different. The scheduler now uses a time-ordered red-black tree, which is self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree; this lets the scheduler add and remove tasks efficiently. Tasks with the greatest need of the processor sit toward the left side of the tree, while tasks with a lower need of the CPU sit toward the right. To maintain fairness, the scheduler always takes the leftmost node of the tree, accounts the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If the task is still runnable, it is reinserted into the red-black tree. Tasks on the left side are thus given time to execute, while the contents of the right side gradually migrate to the left, maintaining fairness.&lt;br /&gt;
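The pick-leftmost, account, reinsert loop described above can be sketched as follows. This hypothetical Python sketch substitutes a binary min-heap for the kernel&#039;s red-black tree (both yield the minimum-vruntime task cheaply); the function name and parameters are illustrative, not kernel APIs:

```python
import heapq

def cfs_schedule(tasks, slice_ns, rounds):
    """Minimal CFS-like loop: always run the task with the smallest
    virtual runtime. `tasks` maps a name to a weight; higher weight
    means the task's vruntime grows more slowly, so it runs more often."""
    # heap entries: (vruntime, name, weight); all start at vruntime 0
    heap = [(0, name, w) for name, w in tasks.items()]
    heapq.heapify(heap)
    history = []
    for _ in range(rounds):
        vrt, name, w = heapq.heappop(heap)   # the "leftmost" task
        history.append(name)                 # run it for one slice
        # account the slice, scaled by weight, and reinsert
        heapq.heappush(heap, (vrt + slice_ns / w, name, w))
    return history
```

A task with twice the weight accumulates vruntime half as fast, so over time it receives roughly twice the CPU share.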
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with large negative nice levels run significantly faster than those with large positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
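The non-linear effect of nice levels can be illustrated with a small sketch. This assumes the commonly cited approximation that each nice step changes a task&#039;s weight by a factor of about 1.25, with nice 0 as the reference weight of 1024; the kernel&#039;s actual lookup tables differ slightly, so treat these numbers as illustrative:

```python
def nice_to_weight(nice):
    """Approximate nice-to-weight mapping: each nice step changes a
    task's CPU share by roughly 10%, i.e. about a 1.25x change in
    weight (nice 0 is the reference weight, 1024)."""
    return 1024 / (1.25 ** nice)

def cpu_share(nices):
    """Fraction of CPU each task gets when all tasks are runnable.
    `nices` maps a task name to its nice value."""
    weights = {name: nice_to_weight(v) for name, v in nices.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

For example, a task at nice -5 ends up with roughly nine times the weight of a task at nice +5, which is why large negative nice levels feel dramatically faster.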
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3459</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3459"/>
		<updated>2010-10-14T00:24:59Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I just moved the Resources section to our discussion page --[[User:AbsMechanik|AbsMechanik]] 18:19, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and Linux in its current version uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know the details of how it worked, the scheduling algorithm ran in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. Just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; that changed in 2.6.23. CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions; that eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors, and it has a constant execution time regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads are assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
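The run-queue scan in points 1 and 2 could be sketched like this in Python; the names and queue contents are invented purely to illustrate the idea, not FreeBSD symbols:

```python
# Sketch of the classic BSD run-queue scan: queues are ordered from
# highest to lowest priority, and the first thread of the first
# non-empty queue runs next. Names are illustrative, not kernel code.

def pick_next_thread(run_queues):
    """run_queues: list of lists, index 0 = highest priority."""
    for queue in run_queues:
        if queue:                     # first non-empty queue wins
            return queue.pop(0)       # take its first thread
    return None                       # nothing runnable -> idle

# Example: the highest-priority queue is empty, so the scan falls
# through to the next one.
queues = [[], ["editor", "shell"], ["batch_job"]]
```

Within the chosen queue, each thread would then get its equal 0.1-second slice as described in point 3.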
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. Scheduler uses 2 priority queue arrays to achieve fairness. Does this by giving each thread a time slice and a priority and executes each thread in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
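The active/expired (exhausted) two-array idea in point 2 could be sketched roughly like this; the thread fields and one-tick accounting are my own invention for illustration, not the kernel's actual per-priority bitmap implementation:

```python
# Sketch of the O(1) scheduler's two-array idea: threads run from an
# "active" array; a thread that exhausts its time slice moves to the
# "expired" array; when the active array drains, the arrays swap.

def tick(active, expired):
    """Give the front thread one tick of CPU; return (name, active, expired)."""
    if not active:                        # active array drained: swap
        active, expired = expired, active
    thread = active.pop(0)
    thread["slice"] -= 1                  # charge one tick of CPU time
    if thread["slice"] > 0:
        active.append(thread)             # still has slice left
    else:
        thread["slice"] = thread["quantum"]   # recharge for next round
        expired.append(thread)            # waits until the arrays swap
    return thread["name"], active, expired
```

Because exhausted threads sit in the expired array until the swap, a high-priority thread cannot starve the others indefinitely, which speaks to the starvation worry above.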
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think we have done enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text, compare it to the references posted in the &amp;quot;Sources&amp;quot; section, and check for plagiarism. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to be &lt;br /&gt;
efficient in adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed: it used the idea &lt;br /&gt;
of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was &lt;br /&gt;
the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its &lt;br /&gt;
predecessors, but it also had more features. Its running time was O(n) because it iterated over each task during a &lt;br /&gt;
scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task &lt;br /&gt;
did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute &lt;br /&gt;
longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and &lt;br /&gt;
lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware &lt;br /&gt;
architectures, such as multi-core processors.&lt;br /&gt;
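The epoch carry-over behaviour described above could be sketched like this; the field names and base quantum are invented for illustration, not actual 2.4 kernel values:

```python
# Sketch of the 2.4-era epoch idea: at the start of each new epoch,
# every task's time slice is refilled, and any time left unused in the
# previous epoch carries over into the new slice.

def start_new_epoch(tasks, base_slice):
    """tasks: list of {'slice': remaining_time}. Refill each slice,
    carrying over whatever was left from the previous epoch."""
    for task in tasks:
        task["slice"] = base_slice + task["slice"]   # leftover carries over
    return tasks
```

So a task that slept through most of an epoch accumulates a larger slice, letting it run longer once it wakes, at the cost of the O(n) sweep over all tasks.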
&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up to Linux 2.6.23: the O(1) scheduler. It made scheduling &lt;br /&gt;
decisions in constant time, independent of the number of tasks, and kept track of the tasks in a &lt;br /&gt;
run queue. The scheduler offered much better scalability. To determine whether a task was I/O-bound or processor-bound, the &lt;br /&gt;
scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and most of it &lt;br /&gt;
existed to calculate heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the &lt;br /&gt;
scheduler used in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS scheduler took the O(1) scheduler&#039;s place in the kernel. CFS uses the idea of maintaining &lt;br /&gt;
fairness in providing processor time to tasks, which means each task gets a fair amount of time to run on the processor. &lt;br /&gt;
When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to maintain &lt;br /&gt;
fairness. To determine the balance, CFS maintains the amount of time given to each task, which is called the virtual &lt;br /&gt;
runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing &lt;br /&gt;
and operates in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. &lt;br /&gt;
Tasks with the greatest need for the processor are stored on the left side of the tree; tasks with a lower need for CPU &lt;br /&gt;
are stored on the right side of the tree. To keep fairness, the scheduler picks the leftmost node of the tree. The &lt;br /&gt;
scheduler then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is reinserted &lt;br /&gt;
into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side &lt;br /&gt;
of the tree migrate to the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
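The pick-leftmost/account/reinsert cycle can be sketched in Python; here a min-heap keyed on virtual runtime stands in for the red-black tree, since both give cheap access to the smallest key. Task names and the slice value are made up for illustration:

```python
import heapq

# Sketch of the CFS cycle: take the task with the least vruntime (the
# "leftmost node"), charge it for its CPU time, and reinsert it.

def schedule_once(timeline, slice_ns):
    """Pop the task with the least vruntime, charge it slice_ns of
    execution time, and reinsert it (assuming it is still runnable)."""
    vruntime, name = heapq.heappop(timeline)    # take the leftmost task
    vruntime += slice_ns                        # account its CPU time
    heapq.heappush(timeline, (vruntime, name))  # reinsert into the tree
    return name

# Two runnable tasks: B has already accumulated some vruntime.
timeline = [(0, "A"), (5, "B")]
heapq.heapify(timeline)
```

Running the task pushes its vruntime up, so it drifts rightward and the other tasks naturally become "leftmost" in turn; that is the fairness mechanism in miniature.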
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the nice shell command. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
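The fixed nice-to-slice idea could be sketched like this; the bounds and the linear interpolation below are my own invention for illustration, not the kernel's actual constants:

```python
# Toy mapping from nice level to a fixed time slice: a lower (less
# nice) value gets a bigger slice. The 5-100 ms bounds and the linear
# ramp are assumptions, not real kernel numbers.

MIN_SLICE_MS, MAX_SLICE_MS = 5, 100       # assumed bounds

def timeslice_ms(nice):
    """nice in [-20, +19]; less nice => bigger slice."""
    span = MAX_SLICE_MS - MIN_SLICE_MS
    return MAX_SLICE_MS - (nice + 20) * span / 39
```

The point of a fixed table like this is exactly what the paragraph above says: the slice depends only on the nice level, not on the processor's clock speed.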
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also monitors all active programs. If a program has been waiting an abnormally long time for the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here&#039;s something I put into the Linux: Overview section:&lt;br /&gt;
&amp;lt;br /&amp;gt;I (Sschnei1) added some text to the Round-Robin Scheduling and the Multilevel Queue Scheduling.&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since UNIX, the operating system Linux is modeled on, was originally released in 1969.[http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html] The early versions of Linux had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; later schedulers make their decisions in constant or logarithmic time, largely independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
There are five basic algorithms for allocating CPU time[http://en.wikipedia.org/wiki/Scheduling_(computing)#Scheduling_disciplines]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;First-in, First-out: No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Shortest Time Remaining: Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Fixed-Priority Preemptive Scheduling: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Round-Robin Scheduling: Fair multi-tasking. This method is similar in concept to Fixed-Priority Preemptive Scheduling, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time. Round-robin scheduling was used in Linux 1.2.&amp;lt;/li&amp;gt; &lt;br /&gt;
&amp;lt;li&amp;gt;Multilevel Queue Scheduling: Rule-based multi-tasking. This method is also similar to Fixed-Priority Preemptive Scheduling, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system. The O(1) algorithm used from 2.6 up to 2.6.23 is based on a multilevel queue.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
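As a concrete example of one of these disciplines, here is a tiny round-robin simulation in Python; the workloads and quantum are made up, and a circular queue keeps adding and removing processes cheap, as described for Linux 1.2:

```python
from collections import deque

# Minimal round-robin simulation: every process gets an equal quantum;
# unfinished processes go to the back of the circular queue.

def round_robin(work, quantum):
    """work: {name: remaining_units}. Returns the completion order."""
    queue = deque(work)               # circular queue of process names
    done = []
    while queue:
        name = queue.popleft()
        work[name] -= quantum         # spend one quantum on it
        if work[name] > 0:
            queue.append(name)        # back of the circular queue
        else:
            done.append(name)         # finished
    return done
```

Note how short jobs finish earlier without any explicit priorities, purely because everyone gets the same share per round.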
&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:27, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 00:24, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing a contrast of the CFS scheduler right now; please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS realizes the model of a scheduler that provides precise multitasking on real hardware. Precise multitasking means that each process can run at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can be executed at a time while the other tasks have to wait, which would otherwise give the running task an unfair amount of CPU time. &lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance across processes, CFS maintains a wait runtime for each process and tries to pick the process with the highest wait-runtime value. To provide real multitasking, CFS splits up the CPU time between running processes, allowing multiple processes to make progress in parallel on a single CPU.&lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, as in the O(1) scheduler, but in a self-balancing red-black tree, ordered so that the task with the greatest need for CPU time sits in the leftmost node. Tasks with a lower need for CPU time are stored on the right side of the tree, while tasks with a higher need for CPU time are stored on the left side. The scheduler picks the task in the leftmost node, and if the process is ready to run, it is given CPU time and its virtual runtime is updated. The tree then re-balances itself and the next task can be picked for the CPU.&lt;br /&gt;
&lt;br /&gt;
CFS is designed so that it does not need to do fixed timeslicing on the CPU while still providing high performance and CPU utilization. This is due to its nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit, here&#039;s what I&#039;ve done so far. I&#039;ve been going through stuff on the 4BSD and ULE schedulers, here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads’ different scheduling requirements. FreeBSD&#039;s time-share-scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect resource requirements and the amount consumed by the thread. Based on the thread&#039;s priority, it gets moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread, if it&#039;s in user mode. Otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority, and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads waiting for I/O for one or more seconds, and by lowering the priority of threads that hog significant amounts of CPU time.&lt;br /&gt;
&lt;br /&gt;
[1] In older BSD systems, (and I mean old, as in 20 or so years ago), a 1 second quantum was used for the round-robin scheduling algorithm. Later, in BSD 4.2, it did rescheduling every 0.1 seconds, and priority re-computation every second, and these values haven’t changed since.  Round-robin scheduling is done by a timeout mechanism, which informs the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 sec later. The priority re-computation is also timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
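The self-resubmitting timeout scheme just described can be sketched as a small simulation; times are in milliseconds to keep the arithmetic exact, and the structure is illustrative rather than the actual kernel callout code:

```python
# Sketch of BSD 4.2's timeout-driven scheme: a rescheduling event
# fires every 0.1 s (100 ms) and a priority re-computation every 1 s,
# each timeout resubmitting itself after it runs.

def clock_events(until_ms):
    """Return (time_ms, event) pairs produced by the two timeouts."""
    events = []
    for t in range(100, until_ms + 1, 100):   # the 100 ms reschedule tick
        events.append((t, "reschedule"))
        if t % 1000 == 0:                      # the 1 s priority pass
            events.append((t, "recompute_priorities"))
    return events
```

The key property is that neither timeout is periodic by itself; each firing schedules the next one, exactly as the subroutines in the paragraph above resubmit their own timeouts.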
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, but was disabled by default in favor of the default 4BSD scheduler. It was not until FreeBSD 7.1 that the ULE scheduler became the new default. The ULE scheduler was an overhaul of the original scheduler, adding support for symmetric multiprocessing (SMP) and for symmetric multithreading (SMT) on multi-core systems, and improving the scheduling algorithm to ensure execution is no longer limited by the number of threads in the system. &lt;br /&gt;
&lt;br /&gt;
ULE has a constant execution time of O(1), regardless of the number of threads. In addition, it is careful to identify interactive tasks and give them the lowest-latency response possible. The scheduling components include several queues, two CPU load-balancing algorithms, an interactivity scorer, a CPU usage estimator, a priority calculator, and a slice calculator.&lt;br /&gt;
&lt;br /&gt;
Since ULE is an event driven scheduler, there is no periodic timer that adjusts thread priority to prevent starvation. Fairness is implemented by maintaining two queues: the current and the next queue. Each thread that is granted a CPU slice is assigned to either the current or next queue. Threads are then picked from the current queue in order of priority until it is empty, which is when the next and current queues are switched, and the process begins again. This guarantees that each thread will be given use of its slice once every two queue cycles, regardless of priority. Interrupt and real-time threads (and threads with these priority levels) are inserted onto the current queue, for they are of the highest priority. There is also an idle class of threads, which is checked only when there are no other runnable tasks.&lt;br /&gt;
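The current/next two-queue mechanism could be sketched like this; the data layout is invented for illustration, not FreeBSD source:

```python
# Sketch of ULE's two-queue fairness: threads run from the "current"
# queue in priority order; each thread that runs is requeued on
# "next"; when current empties, the two queues are swapped, so every
# thread runs once per two queue-switch cycles.

def next_thread(state):
    """state: {'current': [...], 'next': [...]}, lists kept in
    priority order. Pops the next thread to run, swapping if needed."""
    if not state["current"]:
        state["current"], state["next"] = state["next"], state["current"]
    if not state["current"]:
        return None                   # nothing runnable at all
    thread = state["current"].pop(0)
    state["next"].append(thread)      # queued for the following cycle
    return thread
```

This is why ULE needs no periodic anti-starvation timer: a low-priority thread can be passed over within one cycle, but never across cycle boundaries.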
&lt;br /&gt;
In order to promptly discover when a thread changes from interactive to non-interactive, the ULE scheduler uses Interactivity Scoring, a key part of the scheduler affecting the responsiveness of the system, and subsequently, user experience. An interactivity score is computed from the relationship between run and sleep time, using a formula that is out of scope for this project. Interactive threads usually have high sleep times because they are often waiting for user input. This usually is followed by quick bursts in CPU activity from processing the user&#039;s request.&lt;br /&gt;
&lt;br /&gt;
The ULE also implements a priority calculator, not to maintain fairness, but only to order the threads by priority. Only time-sharing threads use the priority calculator; the rest are allotted priorities statically. ULE uses the interactivity score to determine the nice value of a thread, which in turn decides its priority.&lt;br /&gt;
&lt;br /&gt;
ULE uses its nice values in combination with slices by implementing a moving window of nice values that are allowed slices. The threads within the window are given a slice inversely proportional to the difference between their nice value and the lowest recorded nice value. This results in smaller slices for nicer threads, which subsequently defines their amount of allotted CPU time. On x86, FreeBSD allows a minimum slice value of 10ms and a maximum of 140ms. Interactive tasks receive the smallest slice value, so that the scheduler can more promptly discover when an interactive task is no longer interacting.&lt;br /&gt;
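The slice calculation described above could be sketched like this; the 10 ms/140 ms bounds come from the text, but the per-nice-point step is my own assumption for illustration, not FreeBSD's actual formula:

```python
# Toy version of ULE's slice calculator: a thread's slice shrinks in
# proportion to how far its nice value sits above the lowest nice
# value in the window, clamped to the stated 10-140 ms range.

MIN_SLICE_MS, MAX_SLICE_MS = 10, 140
STEP_MS = 10                          # assumed penalty per nice point

def slice_ms(nice, lowest_nice):
    """Bigger gap from the least-nice thread in the window => smaller slice."""
    penalty = (nice - lowest_nice) * STEP_MS
    return max(MIN_SLICE_MS, MAX_SLICE_MS - penalty)
```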
&lt;br /&gt;
The ULE uses a CPU usage estimator to show roughly how much CPU a thread is using. It operates on an event-driven basis: ULE keeps track of the number of clock ticks that occurred within a sliding window of the thread&#039;s execution time. The window slides to upwards of one second past the threshold, and then back down, to regulate the ratio of run time to sleep time.&lt;br /&gt;
&lt;br /&gt;
The ULE enables SMP (Symmetric Multiprocessing) support in order to better achieve CPU affinity, which means scheduling threads onto the last CPU they ran on, so as to avoid unnecessary CPU migrations. Processors typically have large caches to aid the performance of threads and processes. CPU affinity is key because a thread may still have leftover data in the caches of the previous CPU, and when a thread migrates, it not only has to load this data into the new CPU&#039;s cache, but must also clear the data from the previous CPU&#039;s cache. ULE has two methods for CPU load balancing: pull and push. Pull is when an idle CPU grabs a thread from a non-idle CPU to lend a hand. Push is when a periodic task evaluates the current load situation of the CPUs and balances it out amongst them. These two function side by side, and allow for an optimally balanced CPU workload.&lt;br /&gt;
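The "pull" half of the load balancing could be sketched like this; the data layout is invented purely to show the idea:

```python
# Sketch of pull-style balancing: each idle CPU steals one thread from
# the run queue of the busiest CPU, but only if that CPU has more than
# one thread (stealing its only thread would not help anyone).

def pull_balance(cpus):
    """cpus: {cpu_id: [threads]}. Idle CPUs pull from the busiest."""
    for idle in [c for c, q in cpus.items() if not q]:
        busiest = max(cpus, key=lambda c: len(cpus[c]))
        if len(cpus[busiest]) > 1:    # only steal if it helps
            cpus[idle].append(cpus[busiest].pop())
    return cpus
```

A real implementation would also weigh affinity, since (as noted above) the stolen thread pays a cache-migration cost on its new CPU.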
&lt;br /&gt;
SMT (Symmetric Multithreading), a concept of non-uniform processors, is not fully present in ULE. The foundations are there, however, which can eventually be extended to support NUMA (Non-Uniform Memory Architecture). This involves expressing the penalties of CPU migration through separate queues, which could be extended to add a local and global load-balancing policy. As far as my sources go, FreeBSD does not at this point support NUMA, however the groundwork is there, and it is a real possibility for it to appear in a future version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 23:27, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I would agree with putting in superscripted citations that refer to the Sources section. How do they do it on Wikipedia? &lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 18:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Superscripted citations seems to be the best way to do it. If we cite URLs throughout the essay, it will be much harder to read. To put in a superscripted citation, enclose the URL of your source in square brackets.&lt;br /&gt;
&lt;br /&gt;
Also, who here is actually good at writing, and can compile all these paragraphs into one nice essay for us? I think we have enough raw information here, it&#039;s just a matter of putting it all together now.&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] 20:39, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Abhinav is putting something together right now on the main page. &lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 20:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a short foreword on schedulers in relation to types of threads that I&#039;ve composed based on one of my sources. I&#039;m not sure if it&#039;s necessary, since there is one Mike typed above, but here it is for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a brief, few-sentence description of the issue? I&#039;ve found a source for what I think he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
&lt;br /&gt;
I&#039;ll be posting more of what I&#039;ve got on the BSD stuff within the hour.&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 22:59, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3446</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3446"/>
		<updated>2010-10-14T00:14:48Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I just moved the Resources section to our discussion page --[[User:AbsMechanik|AbsMechanik]] 18:19, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and the current version of Linux uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD scheduling: http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources:&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time, as well as to address the multiprocessing issues. &lt;br /&gt;
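To make that contrast concrete, here is a toy Python sketch (not actual kernel code; all names, fields, and numbers are invented for illustration) of an O(n) scan versus the O(1) bitmap-plus-queues lookup:&lt;br /&gt;

```python
# Toy sketch: why a full scan is O(n), and why a fixed-size priority
# bitmap plus per-priority queues gives an O(1) scheduling decision.

def pick_next_o_n(tasks):
    """O(n): scan every runnable task and pick the best 'goodness' score."""
    best = None
    for t in tasks:  # cost grows with the number of runnable tasks
        if best is None or t["goodness"] > best["goodness"]:
            best = t
    return best

def pick_next_o1(queues, bitmap):
    """O(1): find the lowest set bit (highest priority) in a fixed-size
    bitmap, then pop from that priority's queue. The bitmap scan is bounded
    by a constant (Linux 2.6 used 140 priority levels), hence O(1)."""
    prio = min(i for i, bit in enumerate(bitmap) if bit)
    return queues[prio].pop(0)

tasks = [{"pid": 1, "goodness": 3}, {"pid": 2, "goodness": 7}]
assert pick_next_o_n(tasks)["pid"] == 2

queues = {5: [{"pid": 9}]}
bitmap = [False] * 140
bitmap[5] = True
assert pick_next_o1(queues, bitmap)["pid"] == 9
```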
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. CFS is not based on run queues like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, but it was disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessor and multiprocessor systems. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
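The run-queue scan in points 2 and 3 could be sketched like this (illustrative Python, not FreeBSD code; the thread names and priority numbers are invented):&lt;br /&gt;

```python
# Sketch of the original FreeBSD scheduler's queue scan: walk the run
# queues from highest to lowest priority, run the first thread of the
# first non-empty queue, and round-robin within that queue.
QUANTUM = 0.1  # seconds; the historical time slice mentioned above

run_queues = {  # priority -> FIFO list of threads (higher number = higher priority)
    30: [],
    20: ["editor", "shell"],
    10: ["batch_job"],
}

def pick_next():
    """Return the thread to run for the next QUANTUM, or None when idle."""
    for prio in sorted(run_queues, reverse=True):
        if run_queues[prio]:
            thread = run_queues[prio].pop(0)
            run_queues[prio].append(thread)  # round-robin within the queue
            return thread
    return None

assert pick_next() == "editor"
assert pick_next() == "shell"
# Note: "batch_job" at priority 10 never runs while queue 20 is non-empty,
# which is why priorities must be recomputed periodically.
```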
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support simultaneous multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The January 2002 version included an O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses two priority arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.&lt;br /&gt;
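Point 2&#039;s two-array design can be modeled roughly like this (toy Python; the thread names and slice values are made up):&lt;br /&gt;

```python
# Toy model of the active/expired two-array design: tasks that use up
# their slice move to the expired array; when the active array empties,
# the two arrays are swapped in O(1).
from collections import deque

active = deque([("high_prio", 20), ("low_prio", 5)])  # (task, timeslice in ms)
expired = deque()

def run_one_pass():
    """Drain the active array, then swap active and expired."""
    global active, expired
    order = []
    while active:
        task, slice_ms = active.popleft()
        order.append(task)                 # task runs until its slice is spent
        expired.append((task, slice_ms))   # fresh slice granted on expiry
    active, expired = expired, active      # O(1) swap, no rescan of all tasks
    return order

assert run_one_pass() == ["high_prio", "low_prio"]
assert run_one_pass() == ["high_prio", "low_prio"]  # everyone runs each pass
```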
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should post our names and contact emails, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text, compare it to the references posted in the &amp;quot;Sources&amp;quot; section, and check that nothing is plagiarized. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
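The time-slice carry-over between epochs could be illustrated like this (hypothetical Python sketch; the 60 ms base slice is an invented number, not the kernel&#039;s):&lt;br /&gt;

```python
# Sketch of the Linux 2.4 epoch behaviour described above: time a task
# leaves unused in one epoch is added to its slice for the next epoch.
def next_epoch_slices(leftover_ms, base_slice=60):
    """leftover_ms: {task_name: ms left unused this epoch}.
    Returns each task's slice for the next epoch."""
    return {name: base_slice + left for name, left in leftover_ms.items()}

# An I/O-bound task that slept through most of its slice gets a longer
# slice next epoch; a CPU hog that used everything gets only the base.
slices = next_epoch_slices({"io_bound": 40, "cpu_bound": 0})
assert slices == {"io_bound": 100, "cpu_bound": 60}
```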
&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up to Linux 2.6.23: the O(1) scheduler. It could pick the next task to run in constant time, regardless of how many tasks were in the system, and it kept track of the tasks in a run queue. The scheduler offered much more scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and a large part of it existed just to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task gets a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, the task has to be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered, self-balancing red-black tree, which supports insertions and deletions in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need for the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree. The scheduler then accounts for the task&#039;s execution time on the CPU and adds it to the virtual runtime; if still runnable, the task is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
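That pick-leftmost-and-reinsert cycle might be sketched like this (Python, using a heap as a stand-in for the kernel&#039;s red-black tree, since both give O(log n) updates; task names and runtimes are invented):&lt;br /&gt;

```python
# Sketch of the CFS cycle: the task with the smallest virtual runtime
# (the "leftmost node") runs, its vruntime is charged for the time it
# ran, and it is reinserted into the timeline.
import heapq

timeline = []  # (vruntime_ns, task); smallest vruntime == leftmost node
for task in ["A", "B", "C"]:
    heapq.heappush(timeline, (0, task))

def schedule_once(ran_ns):
    vruntime, task = heapq.heappop(timeline)               # leftmost task runs
    heapq.heappush(timeline, (vruntime + ran_ns, task))    # reinsert, charged
    return task

history = [schedule_once(1_000_000) for _ in range(6)]
# Over 6 scheduling decisions, each of the 3 tasks ran exactly twice:
assert sorted(history.count(t) for t in "ABC") == [2, 2, 2]
```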
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
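To illustrate the idea of fixed per-nice-level slices, here is a hypothetical mapping (the numbers below are invented for illustration and are not the kernel&#039;s actual table):&lt;br /&gt;

```python
# Illustrative nice-to-timeslice mapping: nicer (higher) values get
# smaller fixed time slices; the endpoints 200ms/5ms are made up.
def timeslice_ms(nice):
    assert -20 <= nice <= 19
    max_ms, min_ms = 200, 5
    # Linear interpolation across the 39-step nice range.
    return round(max_ms - (nice + 20) * (max_ms - min_ms) / 39)

assert timeslice_ms(-20) == 200  # greediest program, biggest slice
assert timeslice_ms(19) == 5     # nicest program, smallest slice
assert timeslice_ms(0) > timeslice_ms(10)  # monotonically decreasing
```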
&lt;br /&gt;
&lt;br /&gt;
Here&#039;s something I put into the Linux: Overview section:&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has its roots in the UNIX operating system, originally released in 1969,[http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html] and has undergone many changes over the decades. The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
There are five basic algorithms for allocating CPU time[http://en.wikipedia.org/wiki/Scheduling_(computing)#Scheduling_disciplines]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;First-in, First-out: No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Shortest Time Remaining: Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Fixed-Priority Preemptive Scheduling: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Round-Robin Scheduling: Fair multi-tasking. This method is similar in concept to Fixed-Priority Preemptive Scheduling, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Multilevel Queue Scheduling: Rule-based multi-tasking. This method is also similar to Fixed-Priority Preemptive Scheduling, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
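A quick toy contrast of the first and fourth disciplines above (illustrative Python; the job names and lengths are made up):&lt;br /&gt;

```python
# FIFO runs each job to completion in arrival order; round-robin
# interleaves fixed quanta, so short jobs finish sooner.
def fifo(jobs):
    """jobs: list of (name, length). Completion order == arrival order."""
    return [name for name, _ in jobs]

def round_robin(jobs, quantum=1):
    queue, done = list(jobs), []
    while queue:
        name, left = queue.pop(0)
        left -= quantum                 # job runs for one quantum
        if left > 0:
            queue.append((name, left))  # not finished: back of the queue
        else:
            done.append(name)
    return done

jobs = [("long", 3), ("short", 1)]
assert fifo(jobs) == ["long", "short"]         # short job waits behind long
assert round_robin(jobs) == ["short", "long"]  # short job finishes first
```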
&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:27, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing on a contrast of the CFS scheduler right now, please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS realizes the model of a scheduler that can execute precise multitasking on real hardware. Precise multitasking means that each process runs at equal speed. If 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can be executed at a time while other tasks have to wait, which gives the running task an unfair amount of CPU time. &lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance across processes, CFS maintains a wait run-time for each process and tries to pick the process with the highest wait run-time value. To approximate real multitasking, CFS splits up the CPU time between running processes, allowing multiple processes to make progress in parallel on a single CPU.&lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, as in the O(1) scheduler, but in a self-balancing red-black tree. The task with the highest need for CPU time is stored in the leftmost node; tasks with a lower need for CPU time are stored toward the right side of the tree, and tasks with a higher need toward the left. The scheduler picks the leftmost task, and if the process is ready to run, it is given CPU time; its virtual runtime is updated accordingly. The tree then re-balances itself, and new tasks can be picked for the CPU.&lt;br /&gt;
&lt;br /&gt;
CFS is designed in a way that it does not need to do timeslicing on the CPU, yet still provides the most performance with as much CPU utilization as possible. This is due to its nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit, here&#039;s what I&#039;ve done so far. I&#039;ve been going through stuff on the 4BSD and ULE schedulers, here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads’ different scheduling requirements. FreeBSD&#039;s time-share-scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect resource requirements and the amount consumed by the thread. Based on the thread&#039;s priority, it gets moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread if it&#039;s in user mode. Otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority, and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads that have been waiting for I/O for one or more seconds, and by lowering the priority of threads that hog significant amounts of CPU time.&lt;br /&gt;
&lt;br /&gt;
[1] In older BSD systems (and I mean old, as in 20 or so years ago), a 1-second quantum was used for the round-robin scheduling algorithm. Later, in 4.2BSD, rescheduling was done every 0.1 seconds and priority re-computation every second, and these values haven’t changed since. Round-robin scheduling is done by a timeout mechanism, which tells the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 seconds later. The priority re-computation is also timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, but it was disabled by default in favor of the 4BSD scheduler. It was not until FreeBSD 7.1 that ULE became the new default. The ULE scheduler was an overhaul of the original scheduler that added support for symmetric multiprocessing (SMP) and simultaneous multithreading (SMT) on multi-core systems, and improved the scheduling algorithm to ensure execution time is no longer limited by the number of threads in the system. &lt;br /&gt;
&lt;br /&gt;
ULE has a constant, O(1), execution time, regardless of the number of threads. In addition, it is careful to identify interactive tasks and give them the lowest latency response possible. The scheduling components include several queues, two CPU load-balancing algorithms, an interactivity scorer, a CPU usage estimator, a priority calculator, and a slice calculator.&lt;br /&gt;
&lt;br /&gt;
Since ULE is an event-driven scheduler, there is no periodic timer that adjusts thread priority to prevent starvation. Fairness is implemented by maintaining two queues: the current queue and the next queue. Each thread that is granted a CPU slice is assigned to either the current or the next queue. Threads are picked from the current queue in order of priority until it is empty, at which point the next and current queues are switched and the process begins again. This guarantees that each thread will get to use its slice once every two queue cycles, regardless of priority. Interrupt and real-time threads (and threads with these priority levels) are inserted into the current queue, as they are of the highest priority. There is also an idle class of threads, which is checked only when there are no other runnable tasks.&lt;br /&gt;
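The current/next queue switching could be modeled roughly like this (toy Python; thread names and priority numbers are invented):&lt;br /&gt;

```python
# Toy model of ULE's two-queue fairness: drain 'current' in priority
# order, enqueue each thread onto 'nxt' for its next slice, then swap.
current = [("interactive", 90), ("normal", 50)]  # (thread, priority)
nxt = []

def run_cycle():
    """Run every thread in 'current' once, then swap the two queues."""
    global current, nxt
    ran = [name for name, _ in sorted(current, key=lambda t: -t[1])]
    nxt.extend(current)          # each thread gets a fresh slot next cycle
    current, nxt = nxt, []
    return ran

assert run_cycle() == ["interactive", "normal"]
# Every thread is guaranteed to run again within two queue cycles:
assert run_cycle() == ["interactive", "normal"]
```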
&lt;br /&gt;
In order to promptly discover when a thread changes from interactive to non-interactive, the ULE scheduler uses interactivity scoring, a key part of the scheduler affecting the responsiveness of the system, and subsequently, the user experience. An interactivity score is computed from the relationship between run and sleep time, using a formula that is out of scope for this project. Interactive threads usually have high sleep times because they are often waiting for user input. This is usually followed by quick bursts in CPU activity from processing the user&#039;s request.&lt;br /&gt;
&lt;br /&gt;
ULE also implements a priority calculator, not to maintain fairness, but only to order the threads by priority. Only time-sharing threads use the priority calculator; the rest are assigned priorities statically. ULE uses the interactivity score together with a thread&#039;s nice value to decide its priority.&lt;br /&gt;
&lt;br /&gt;
ULE combines nice values with slices by implementing a moving window of nice values that are allowed slices. The threads within the window are given a slice inversely proportional to the difference between their nice value and the lowest recorded nice value. This results in smaller slices for nicer threads, which in turn defines their amount of allotted CPU time. On x86, FreeBSD allows a minimum slice value of 10ms and a maximum of 140ms. Interactive tasks receive the smallest slice value, so that the scheduler can more promptly detect when an interactive task is no longer interactive.&lt;br /&gt;
&lt;br /&gt;
ULE uses a CPU usage estimator to show roughly how much CPU a thread is using; it operates on an event-driven basis. ULE keeps track of the number of clock ticks that occurred within a sliding window of the thread&#039;s execution time. The window slides up to one second past the threshold, and then back down, to regulate the ratio of run time to sleep time.&lt;br /&gt;
&lt;br /&gt;
ULE supports SMP (symmetric multiprocessing) while trying to preserve CPU affinity, which means scheduling threads onto the last CPU they ran on so as to avoid unnecessary CPU migrations. Processors typically have large caches to aid the performance of threads and processes. CPU affinity is key because a thread may still have leftover data in the caches of its previous CPU; when a thread migrates, it not only has to load this data into the new CPU&#039;s cache, but the stale copy in the previous CPU&#039;s cache must also be invalidated. ULE has two methods for CPU load balancing: pull and push. Pull is when an idle CPU grabs a thread from a non-idle CPU to lend a hand. Push is a periodic task that evaluates the current load situation of the CPUs and balances it out amongst them. These two function side by side, and allow for an optimally balanced CPU workload.&lt;br /&gt;
&lt;br /&gt;
Full support for SMT (symmetric multithreading) is not yet present in ULE. The foundations are there, however, and could eventually be extended to support NUMA (Non-Uniform Memory Architecture). This involves expressing the penalties of CPU migration through separate queues, which could be extended with local and global load-balancing policies. As far as my sources go, FreeBSD does not at this point support NUMA, but the groundwork is there, and it is a real possibility for a future version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 23:27, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I would agree with putting superscripted citations that refer to the Sources section. How do they do it on Wikipedia? &lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 18:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Superscripted citations seems to be the best way to do it. If we cite URLs throughout the essay, it will be much harder to read. To put in a superscripted citation, enclose the URL of your source in square brackets.&lt;br /&gt;
&lt;br /&gt;
Also, who here is actually good at writing, and can compile all these paragraphs into one nice essay for us? I think we have enough raw information here, it&#039;s just a matter of putting it all together now.&lt;br /&gt;
&lt;br /&gt;
-- [[abondio2|Austin Bondio]] 20:39, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Abhinav is putting something together right now on the main page. &lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 20:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a little foreword on schedulers in relation to types of threads I&#039;ve composed based off of one of my sources. I&#039;m not sure if it&#039;s necessary since there is one Mike typed above, but here it is just for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a brief, few-sentence description of the issue? I&#039;ve found a source for what I think he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
&lt;br /&gt;
I&#039;ll be posting more of what I&#039;ve got on the BSD stuff within the hour.&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 22:59, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3341</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3341"/>
		<updated>2010-10-13T20:56:29Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I just moved the Resources section to our discussion page --[[User:AbsMechanik|AbsMechanik]] 18:19, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and the current version of Linux uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and interactive performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, though it was disabled by default in the early versions; this changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found, the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds, and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often, thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
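The active/expired swap in point 2 can be sketched like this (a toy model; the names and the slice-refill value are invented, and a max() over a dict stands in for the per-priority run queues the real scheduler indexes):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy model of the two-array design sketched above: tasks run from an
# "active" array and move to an "expired" array when their slice runs
# out; when active is empty, the arrays are swapped. Real kernels keep
# per-priority run queues; max() over a dict stands in for that here.
class O1Toy:
    def __init__(self, tasks):
        # tasks: dict name -> (priority, remaining_slice_ticks)
        self.active = dict(tasks)
        self.expired = {}

    def run_one(self, refill=2):
        if not self.active:
            # active array exhausted: swap in the expired array
            self.active, self.expired = self.expired, {}
        if not self.active:
            return None
        # highest-priority task runs first
        name = max(self.active, key=lambda n: self.active[n][0])
        prio, ticks = self.active.pop(name)
        if ticks > 1:
            self.active[name] = (prio, ticks - 1)   # slice left: stay active
        else:
            self.expired[name] = (prio, refill)     # exhausted: refill, expire
        return name
```
&lt;br /&gt;
Because exhausted tasks wait in the expired array until the swap, a high-priority task cannot monopolize the CPU forever, which speaks to the starvation worry below.&lt;br /&gt;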
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think this would lead to starvation situations if the priority were high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our contact emails and names, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text and compare it to the references posted in the &amp;quot;Sources&amp;quot; section to check for plagiarism. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a little foreword on schedulers in relation to types of threads I&#039;ve composed based off of one of my sources. I&#039;m not sure if it&#039;s necessary since there is one Mike typed below, but here it is just for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a brief, few-sentence description of the issue? I&#039;ve found a source for what I think he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:54, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion part from getting cluttered.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, which allowed the scheduler to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Up to Linux 2.6.23, the 2.6 kernels used yet another scheduler, the O(1) scheduler. It needed the same amount of time to make a scheduling decision regardless of the number of tasks, kept track of tasks in a run queue, and so offered much more scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and a large part of it existed only to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining fairness in providing processor time to tasks: each task gets a fair amount of time to run on the processor. When the time a task has received is out of balance, that task has to be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need for the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
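The pick-leftmost behavior can be sketched with a min-heap standing in for the red-black tree (a toy model; the names are invented, and the heap only mimics the cheap access to the task with the smallest virtual runtime):&lt;br /&gt;
&lt;br /&gt;
```python
import heapq

# Minimal sketch of the pick-leftmost idea described above. A min-heap
# stands in for the kernel red-black tree: both give cheap access to
# the task with the smallest virtual runtime (the "leftmost node").
class CFSToy:
    def __init__(self):
        self.tree = []  # (vruntime, name) pairs

    def add(self, name, vruntime=0):
        heapq.heappush(self.tree, (vruntime, name))

    def run_next(self, delta):
        # Pick the task that has received the least CPU time so far,
        # charge it delta of execution, and reinsert it (still runnable).
        vruntime, name = heapq.heappop(self.tree)
        heapq.heappush(self.tree, (vruntime + delta, name))
        return name
```
&lt;br /&gt;
Tasks that have run the least always come out first, so CPU time drifts back toward whichever task is furthest behind.&lt;br /&gt;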
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
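A toy illustration of fixed per-nice slices plus the dynamic boost/penalty just described (every constant here is invented for the sketch, not taken from the kernel):&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative mapping from nice level to a fixed time slice, plus the
# dynamic boost/penalty described above. All constants are assumed for
# the sketch; they are not the actual kernel values.
def base_slice_ms(nice):
    # nice runs from -20 (greedy) to +19 (generous); lower nice, bigger slice
    assert nice >= -20 and 19 >= nice
    return max(5, 100 - 5 * nice)

def effective_nice(nice, waited_long=False, hogged_cpu=False):
    # Temporary adjustments: boost a starved task, penalize a CPU hog.
    if waited_long:
        nice -= 5
    if hogged_cpu:
        nice += 5
    return max(-20, min(19, nice))
```
&lt;br /&gt;
The fixed table decouples slice sizes from processor clock speed, while the temporary nice adjustments model the monitoring behavior without permanently changing a program&#039;s priority.&lt;br /&gt;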
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 18:39, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing on a contrast of the CFS scheduler right now, please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS models a scheduler that can execute precise multitasking on real hardware. Precise multitasking means that each process runs at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can be executed at a time while the other tasks have to wait, which gives the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance over the processes, CFS keeps a wait run-time for each process and tries to pick the process with the highest wait run-time value. To approximate real multitasking, CFS splits the CPU time up between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, as in the O(1) scheduler, but in a self-balancing red-black tree. Tasks with the highest need for CPU time are stored toward the leftmost node, while tasks with a lower need for CPU time are stored on the right side of the tree. The scheduler picks the task on the far left and charges its execution against its virtual runtime; if the process is still ready to run, it is given CPU time and reinserted. The tree then re-balances itself, and the next task can be taken by the CPU.&lt;br /&gt;
&lt;br /&gt;
CFS is designed in a way that it does not need timeslicing and still provides the most performance with as much CPU utilization as possible. This is due to its nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit. I&#039;ve been going through stuff on the 4BSD and ULE schedulers; here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads’ different scheduling requirements. FreeBSD&#039;s time-share-scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect resource requirements and the amount consumed by the thread. Based on the thread&#039;s priority, it gets moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread, if it&#039;s in user mode. Otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority, and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads waiting for I/O for one or more seconds, and by lowering the priority of threads that hog significant amounts of CPU time.&lt;br /&gt;
&lt;br /&gt;
[1] In older BSD systems (and I mean old, as in 20 or so years ago), a 1-second quantum was used for the round-robin scheduling algorithm. Later, in BSD 4.2, rescheduling was done every 0.1 seconds and priority re-computation every second, and these values haven’t changed since. Round-robin scheduling is done by a timeout mechanism, which informs the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 sec later. The priority re-computation is also timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, but was disabled by default in favor of the default 4BSD scheduler. It was not until FreeBSD 7.1 that the ULE scheduler became the new default. The ULE scheduler was an overhaul of the original scheduler that added support for symmetric multiprocessing (SMP) and symmetric multithreading (SMT) on multi-core systems, and improved the scheduling algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&amp;lt;more to come&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:51, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I would agree with putting superscripted citations that refer to the Sources section. How do they do it on Wikipedia? &lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 18:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Superscripted citations seems to be the best way to do it. If we cite URLs throughout the essay, it will be much harder to read. To put in a superscripted citation, enclose the URL of your source in square brackets.&lt;br /&gt;
&lt;br /&gt;
Also, who here is actually good at writing, and can compile all these paragraphs into one nice essay for us? I think we have enough raw information here, it&#039;s just a matter of putting it all together now.&lt;br /&gt;
&lt;br /&gt;
-- [[abondio2|Austin Bondio]] 20:39, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Abhinav is putting something together right now on the main page. &lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 20:56, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3336</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3336"/>
		<updated>2010-10-13T20:35:06Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I just moved the Resources section to our discussion page --[[User:AbsMechanik|AbsMechanik]] 18:19, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and Linux currently uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduling algorithm ran in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. CFS is not based on a queue like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization while maximizing performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads are assigned a scheduling priority, which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
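As a rough sketch of the scan described in point 2 above (the queue layout and thread names are invented for illustration; this is not FreeBSD code):

```python
def pick_next_thread(run_queues):
    """run_queues: list of FIFO lists, index 0 = highest priority."""
    for queue in run_queues:
        if queue:
            return queue.pop(0)  # first thread of the first non-empty queue
    return None  # nothing runnable; the system would idle

# Empty high-priority queue is skipped; "editor" runs before the batch work.
queues = [[], ["editor"], ["batch_job", "backup"]]
assert pick_next_thread(queues) == "editor"
assert pick_next_thread(queues) == "batch_job"
```

Within a queue, each thread would then receive the fixed 0.1-second time slice in round-robin order.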
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included the O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC; larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
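The two-array design in point 2 above can be sketched roughly as follows (the priorities, slice sizes, and refill value are invented assumptions, not the kernel's actual numbers):

```python
from collections import deque

class O1SchedulerSketch:
    def __init__(self, threads):
        # threads: list of (name, priority, slice_ticks); higher priority first
        self.active = deque(sorted(threads, key=lambda t: -t[1]))
        self.expired = deque()

    def run_one(self):
        if not self.active:                  # swap the two arrays in O(1)
            self.active, self.expired = self.expired, self.active
        name, prio, slice_left = self.active.popleft()
        slice_left -= 1                      # consume one tick of the slice
        if slice_left:                       # slice remains: stay active
            self.active.appendleft((name, prio, slice_left))
        else:                                # slice exhausted: move to expired
            self.expired.append((name, prio, 3))
        return name

sched = O1SchedulerSketch([("low", 1, 1), ("high", 5, 1)])
assert sched.run_one() == "high"
assert sched.run_one() == "low"
assert sched.run_one() == "high"  # arrays swapped; "high" runs again
```

The array swap is what keeps every scheduling decision constant-time regardless of thread count.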
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the several schedulers for Linux. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our contact emails and names, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text, compare it to the references posted in the &amp;quot;Sources&amp;quot; section, and check for plagiarism. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a little foreword on schedulers in relation to types of threads I&#039;ve composed based off of one of my sources. I&#039;m not sure if it&#039;s necessary since there is one Mike typed below, but here it is just for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all of these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a short, few-sentence description of the issue? I&#039;ve found a source for what I think he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:54, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up to Linux 2.6.23: the O(1) scheduler. It needed the same amount of time to schedule each task, independent of how many tasks there were, and kept track of tasks in a run queue. This scheduler offered much better scalability. To determine whether a task was I/O-bound or processor-bound, it used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and most of it existed to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which remains the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS scheduler took the O(1) scheduler&#039;s place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time to run on the processor. When a task&#039;s share is out of balance, that task must be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called the virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored on the left side of the tree, and tasks with a lower need for the CPU are stored on the right side. To keep things fair, the scheduler takes the leftmost node from the tree. The scheduler then accounts for the task&#039;s execution time on the CPU and adds it to the virtual runtime; if still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
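A rough sketch of this pick-leftmost/reinsert loop, using Python's heapq as a stand-in for the kernel's red-black tree (also O(log n) insert/extract; the task names and slice length are invented for the example):

```python
import heapq

def cfs_step(tree, slice_ns=1_000_000):
    """Pick the leftmost (smallest-vruntime) task, charge it one slice,
    and reinsert it as if it were still runnable."""
    vruntime, name = heapq.heappop(tree)    # leftmost node: most in need of CPU
    vruntime += slice_ns                    # account the execution time
    heapq.heappush(tree, (vruntime, name))  # reinsert; structure rebalances
    return name

tree = []
for name in ("a", "b", "c"):
    heapq.heappush(tree, (0, name))         # all tasks start with vruntime 0

# Always running the smallest vruntime yields an equal share over time.
history = [cfs_step(tree) for _ in range(6)]
assert history == ["a", "b", "c", "a", "b", "c"]
```

The invariant is that the task with the smallest virtual runtime is always the next to run, which is exactly the fairness rule described above.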
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
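As a toy illustration of a non-linear nice-to-time-slice mapping like the one described above (the formula and millisecond values are invented for the example; they are not the kernel's actual table):

```python
def time_slice_ms(nice):
    """Map a nice level to a fixed time slice, in milliseconds."""
    if nice not in range(-20, 20):
        raise ValueError("nice values range from -20 to +19")
    # Exponential spectrum: every 5 nice levels halves the slice, so high
    # negative nice levels get much larger slices than positive ones.
    return 100 * 2 ** (-nice / 5)

assert time_slice_ms(0) == 100     # default niceness gets the base slice
assert time_slice_ms(-5) == 200    # greedier program: double the slice
assert time_slice_ms(5) == 50      # nicer program: half the slice
```

Any monotonically decreasing mapping would do; the point is only that the slice assigned to a nice level is fixed, not derived from the processor's clock speed.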
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing on a contrast of the CFS scheduler right now, please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS models a scheduler that can execute precise multitasking on real hardware. Precise multitasking means that each process runs at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each. On real hardware, only one task can execute at a time while the others wait, which gives the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance across processes, CFS maintains a wait runtime for each process and tries to pick the process with the highest wait runtime value. To approximate real multitasking, CFS splits up the CPU time between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, as in the O(1) scheduler, but in a self-balancing red-black tree, where tasks with the highest need for CPU time are stored in the leftmost nodes and tasks with a lower need for CPU time are stored toward the right side of the tree. The scheduler picks the leftmost task and charges its execution to its virtual runtime; if the process is ready to run, it is given CPU time. The tree then rebalances itself, and new tasks can be picked for the CPU.&lt;br /&gt;
&lt;br /&gt;
CFS is designed so that it does not need fixed timeslices and still provides high performance with high CPU utilization. This is due to its nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit, here&#039;s what I&#039;ve done so far. I&#039;ve been going through stuff on the 4BSD and ULE schedulers, here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads’ different scheduling requirements. FreeBSD&#039;s time-share scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect resource requirements and the amount of resources consumed by the thread. Based on its priority, a thread gets moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread if it&#039;s in user mode; otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads that have been waiting for I/O for one or more seconds, and by lowering the priority of threads that hog significant amounts of CPU time.&lt;br /&gt;
&lt;br /&gt;
[1] In older BSD systems, (and I mean old, as in 20 or so years ago), a 1 second quantum was used for the round-robin scheduling algorithm. Later, in BSD 4.2, it did rescheduling every 0.1 seconds, and priority re-computation every second, and these values haven’t changed since.  Round-robin scheduling is done by a timeout mechanism, which informs the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 sec later. The priority re-computation is also timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
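The self-resubmitting timeout mechanism can be sketched like this (integer milliseconds and an event list standing in for the clock-interrupt driver are simplifications invented for the example):

```python
import heapq

events = []  # min-heap of (fire_time_ms, seq, callback)
seq = 0

def timeout(delay_ms, callback, now_ms):
    """Ask the 'clock-interrupt driver' to call back after delay_ms."""
    global seq
    heapq.heappush(events, (now_ms + delay_ms, seq, callback))
    seq += 1

fired = []

def roundrobin(now_ms):
    fired.append(now_ms)              # a real kernel would reschedule here
    timeout(100, roundrobin, now_ms)  # resubmit itself 0.1 s later

timeout(100, roundrobin, now_ms=0)    # prime the first timeout
while events:
    now_ms, _, cb = heapq.heappop(events)
    if now_ms > 300:                  # simulate only the first 0.3 s
        break
    cb(now_ms)

assert fired == [100, 200, 300]       # rescheduling fires every 0.1 s
```

The priority re-computation timer works the same way, just with a 1-second interval.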
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, but was disabled by default in favor of the existing 4BSD scheduler. It was not until FreeBSD 7.1 that ULE became the new default. ULE was an overhaul of the original scheduler that added support for symmetric multiprocessing (SMP) and symmetric multithreading (SMT) on multi-core systems, and improved the scheduling algorithm so that execution time is no longer limited by the number of threads in the system.&lt;br /&gt;
&amp;lt;more to come&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:51, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I would agree with putting superscripted citations that refer to the Sources section. How do they do it on Wikipedia? &lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 18:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3317</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3317"/>
		<updated>2010-10-13T18:52:19Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I just moved the Resources section to our discussion page --[[User:AbsMechanik|AbsMechanik]] 18:19, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and Linux currently uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduling algorithm ran in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23. CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline to make future predictions. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3.  kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support simultaneous multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executes each thread in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
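The two-array design described in points 2 and 3 can be sketched in a few lines of Python. This is a toy model under assumed simplifications (the real O(1) scheduler uses 140 priority levels and per-CPU run queues; the class and the refill value here are illustrative only):

```python
from collections import deque

class Task:
    def __init__(self, name, priority, timeslice):
        self.name = name
        self.priority = priority    # lower number = higher priority
        self.timeslice = timeslice  # ticks left in the current epoch

class O1Runqueue:
    """Two priority arrays: tasks run from `active`, exhausted tasks
    wait in `expired` until the arrays are swapped."""
    def __init__(self, levels=5):
        self.active = [deque() for _ in range(levels)]
        self.expired = [deque() for _ in range(levels)]

    def enqueue(self, task):
        self.active[task.priority].append(task)

    def pick_next(self):
        # Cost depends only on the fixed number of priority levels,
        # not on the number of tasks -- hence "O(1)".
        for _ in range(2):
            for level in self.active:
                if level:
                    return level.popleft()
            # Every active queue is empty: swap arrays, starting a new epoch.
            self.active, self.expired = self.expired, self.active
        return None  # nothing runnable at all

    def put_back(self, task):
        # Charge one tick; a task that used up its slice moves to the
        # expired array and waits there until the next epoch.
        task.timeslice -= 1
        if task.timeslice > 0:
            self.active[task.priority].append(task)
        else:
            task.timeslice = 2  # arbitrary refill for the next epoch
            self.expired[task.priority].append(task)
```

The array swap is what ends an epoch in constant time: no per-task bookkeeping loop is needed when every slice is exhausted.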
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails so we can note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text, compare it to the references posted in the &amp;quot;Sources&amp;quot; section, and check that nothing is plagiarized. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a little foreword on schedulers in relation to types of threads that I&#039;ve composed based on one of my sources. I&#039;m not sure if it&#039;s necessary, since there is one Mike typed below, but here it is for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all of these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a little, few sentence description of the issue? I&#039;ve found a source for what I think is what he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:54, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, which allowed it to add &lt;br /&gt;
and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea &lt;br /&gt;
of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was &lt;br /&gt;
the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It became more complex than its &lt;br /&gt;
predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a &lt;br /&gt;
scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task &lt;br /&gt;
did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute &lt;br /&gt;
longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and &lt;br /&gt;
lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware &lt;br /&gt;
architectures such as multi-core processors.&lt;br /&gt;
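That O(n) selection loop can be sketched as follows. This is a rough, assumed simplification: the real 2.4-era goodness() calculation also weighed nice values, CPU affinity and other factors, which are left out here:

```python
def pick_next(tasks):
    """Scan every runnable task and return the one with the largest
    remaining counter -- O(n) work on every scheduling decision."""
    best = None
    for t in tasks:
        if t["counter"] > 0 and (best is None or t["counter"] > best["counter"]):
            best = t
    return best

def new_epoch(tasks):
    """When every runnable task has used its slice, start a new epoch.
    Leftover time carries over: half the old counter is added to the
    task's base slice, so tasks that slept get longer to run."""
    for t in tasks:
        t["counter"] = t["counter"] // 2 + t["base_slice"]
```

Both functions walk the whole task list, which is exactly why this design slowed down as more tasks were added.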
&lt;br /&gt;
Early Linux 2.6, up to version 2.6.23, used the O(1) scheduler. It could pick the next task in constant time, &lt;br /&gt;
no matter how many tasks were runnable, and it kept track of the tasks in run queues. This made the scheduler &lt;br /&gt;
much more scalable. To determine whether a task was I/O-bound or processor-bound, the &lt;br /&gt;
scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and most of &lt;br /&gt;
it existed only to calculate those heuristics, the scheduler was replaced in Linux 2.6.23 with the CFS scheduler, the &lt;br /&gt;
scheduler used in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of maintaining &lt;br /&gt;
fairness in providing processor time to tasks, which means each task gets a fair amount of time to run on the processor. &lt;br /&gt;
When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to keep &lt;br /&gt;
fairness. To determine the balance, CFS maintains the amount of time given to each task, which is called its virtual &lt;br /&gt;
runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing &lt;br /&gt;
and operates in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. &lt;br /&gt;
Tasks with the greatest need for the processor are stored toward the left side of the tree, while tasks with a lower need for the CPU &lt;br /&gt;
are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree. The &lt;br /&gt;
scheduler then accounts the task&#039;s execution time on the CPU and adds it to the virtual runtime. If still runnable, the task is then inserted &lt;br /&gt;
back into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side &lt;br /&gt;
of the tree migrate toward the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
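The pick-and-reinsert cycle above can be sketched in miniature. A binary heap stands in here for the kernel's red-black tree, since both give O(log n) insertion and cheap access to the smallest (leftmost) virtual runtime; the class and field names are illustrative, not kernel APIs:

```python
import heapq

class CFSQueue:
    def __init__(self):
        self.tree = []  # heap of (vruntime, seq, name); minimum = "leftmost"
        self.seq = 0    # tie-breaker so equal vruntimes keep insertion order

    def enqueue(self, name, vruntime=0.0):
        heapq.heappush(self.tree, (vruntime, self.seq, name))
        self.seq += 1

    def run_leftmost(self, delta):
        """Pick the task that has run least, charge `delta` of CPU time
        onto its virtual runtime, and reinsert it. Always choosing the
        minimum vruntime is what keeps the schedule fair."""
        vruntime, _, name = heapq.heappop(self.tree)
        self.enqueue(name, vruntime + delta)
        return name
```

Because a task's vruntime grows while it runs, it drifts to the right of the tree, and tasks that have waited drift to the left until they are picked.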
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
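A toy version of that fixed nice-to-slice mapping might look like the function below. The concrete milliseconds are invented for illustration (the real kernel tables differ); only the shape is the point: lower nice levels get larger slices, and the mapping is deliberately not linear:

```python
def timeslice_ms(nice):
    """Map a nice level (-20..+19) to a fixed time slice in milliseconds.
    Negative (greedy) levels scale up steeply; positive (nice) levels
    shrink toward a small minimum slice. All numbers are illustrative."""
    if nice not in range(-20, 20):
        raise ValueError("nice values range from -20 to +19")
    if nice >= 0:
        return max(5, 100 - nice * 5)   # +19 -> 5 ms minimum
    return 100 + (-nice) * 15           # -20 -> 400 ms maximum
```

Fixing the slice per nice level, rather than deriving it from the clock speed, is what lets the sizes be tuned independently of the hardware.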
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing on a contrast of the CFS scheduler right now, please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS models an ideal, precise multitasking CPU on real hardware. Precise multitasking means that each process runs at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can execute at a time while the other tasks wait, which gives the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance across processes, CFS maintains a wait run-time for each process and tries to pick the process with the highest wait run-time value. To approximate real multitasking, CFS splits the CPU time up between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, but in a self-balancing red-black tree. The task with the highest need for CPU time is stored in the leftmost node; in general, tasks with a higher need for CPU time sit on the left side of the tree, while tasks with a lower need sit on the right side. The scheduler picks the leftmost task and gives it CPU time to run. The tree then re-balances itself, and new tasks can be inserted.&lt;br /&gt;
&lt;br /&gt;
CFS is designed so that it does not need timeslicing. This is due to its nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit, here&#039;s what I&#039;ve done so far. I&#039;ve been going through stuff on the 4BSD and ULE schedulers, here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads’ different scheduling requirements. FreeBSD&#039;s time-share-scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect resource requirements and the amount consumed by the thread. Based on the thread&#039;s priority, it gets moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread if it&#039;s in user mode. Otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority, and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads that have been waiting for I/O for one or more seconds, and by lowering the priority of threads that hog significant amounts of CPU time.&lt;br /&gt;
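The run-queue scan in [3] can be sketched as below. This is a simplified model under assumed details (real 4BSD hashes priorities into a fixed set of queues and also handles in-kernel preemption, which this toy loop ignores):

```python
from collections import deque

def pick_next(run_queues):
    """run_queues: list of deques, index 0 = highest priority.
    Scan from highest to lowest priority and run the first thread of
    the first non-empty queue; rotating the queue gives each thread
    in it an equal time slice, round-robin style."""
    for queue in run_queues:
        if queue:
            thread = queue[0]
            queue.rotate(-1)  # next pick at this level takes the next thread
            return thread
    return None  # every queue empty: nothing runnable
```

Dynamic priority adjustment then amounts to moving a thread between queues, which changes where the scan finds it.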
&lt;br /&gt;
[1] In older BSD systems (and I mean old, as in 20 or so years ago), a 1-second quantum was used for the round-robin scheduling algorithm. Later, in 4.2BSD, rescheduling was done every 0.1 seconds and priority recomputation every second, and these values haven’t changed since. Round-robin scheduling is driven by a timeout mechanism, which informs the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 seconds later. The priority recomputation is likewise timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, though it was disabled by default in favor of the default 4BSD scheduler. It was not until FreeBSD 7.1 that ULE became the new default. The ULE scheduler was an overhaul of the original scheduler: it added support for symmetric multiprocessing (SMP) and for simultaneous multithreading (SMT) on multi-core systems, and improved the scheduling algorithm so that execution time is no longer limited by the number of threads in the system.&lt;br /&gt;
&amp;lt;more to come&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:51, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I would agree with putting superscripted citations that refer to the Sources section. How do they do it on Wikipedia? &lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 18:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3277</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3277"/>
		<updated>2010-10-13T16:43:25Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition to this, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23. CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline to make future predictions. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3.  kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support simultaneous multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executes each thread in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails so we can note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text, compare it to the references posted in the &amp;quot;Sources&amp;quot; section, and check that nothing is plagiarized. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and &lt;br /&gt;
remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea &lt;br /&gt;
of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was &lt;br /&gt;
the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It became more complex than its &lt;br /&gt;
predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a &lt;br /&gt;
scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task &lt;br /&gt;
did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute &lt;br /&gt;
longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and &lt;br /&gt;
lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware &lt;br /&gt;
architectures, such as multi-core processors.&lt;br /&gt;
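The O(n) pick-next loop described above might be sketched roughly as follows. This is an illustrative model only, not kernel code: the names Task, pick_next and new_epoch, and the carry-over rule of half the leftover slice, are assumptions made for the example.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority      # base time slice per epoch
        self.counter = priority       # remaining slice in this epoch

def pick_next(tasks):
    """Scan every runnable task (hence O(n)) and pick the one
    with the most remaining slice."""
    runnable = [t for t in tasks if t.counter > 0]
    if not runnable:
        return None                   # epoch exhausted
    return max(runnable, key=lambda t: t.counter)

def new_epoch(tasks):
    """Start a new epoch: part of any unused slice is carried over,
    so tasks that slept can run longer in the next epoch."""
    for t in tasks:
        t.counter = t.priority + t.counter // 2
```

Note how pick_next must touch every task, which is exactly why this design slows down as tasks are added.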
&lt;br /&gt;
Up to Linux 2.6.23, the 2.6 kernel used another scheduler: the O(1) scheduler. It could select the next &lt;br /&gt;
task to run in constant time, independent of the number of tasks in the system, and it kept track of the tasks in a &lt;br /&gt;
run queue. The scheduler offered much better scalability. To determine whether a task was I/O bound or processor bound, the &lt;br /&gt;
scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain and most of &lt;br /&gt;
it was devoted to calculating heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the &lt;br /&gt;
scheduler in current Linux versions.&lt;br /&gt;
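A rough sketch of how a scheduler can pick the next task in constant time regardless of the number of tasks: one queue per priority level, plus a record of which levels are non-empty. The class name is invented, and the real O(1) scheduler uses a priority bitmap rather than a Python set; only the 140 priority levels and the pick-lowest-level rule follow the descriptions of the design.

```python
NUM_PRIOS = 140  # lower number = higher priority, as in Linux

class O1RunQueue:
    def __init__(self):
        self.queues = [[] for _ in range(NUM_PRIOS)]
        self.nonempty = set()   # priority levels that currently hold tasks

    def enqueue(self, prio, task):
        self.queues[prio].append(task)
        self.nonempty.add(prio)

    def pick_next(self):
        """Cost depends only on the fixed number of priority levels,
        never on how many tasks are queued, hence 'O(1)'."""
        if not self.nonempty:
            return None
        prio = min(self.nonempty)        # best (numerically lowest) level
        task = self.queues[prio].pop(0)
        if not self.queues[prio]:
            self.nonempty.discard(prio)
        return task
```

The contrast with the O(n) design is that no operation here ever iterates over the tasks themselves.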
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of maintaining &lt;br /&gt;
fairness in providing processor time to tasks, meaning each task gets a fair amount of time to run on the processor. &lt;br /&gt;
When a task&#039;s share of time is out of balance, the task has to be given more time, because the scheduler has to keep &lt;br /&gt;
things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its virtual &lt;br /&gt;
runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing, &lt;br /&gt;
and its operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. &lt;br /&gt;
Tasks with the greatest need for the processor are stored on the left side of the tree, while tasks with a lower need for the CPU &lt;br /&gt;
are stored on the right side. To keep fairness, the scheduler takes the leftmost node from the tree. The &lt;br /&gt;
scheduler then accounts for the execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is reinserted &lt;br /&gt;
into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side &lt;br /&gt;
of the tree migrate to the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
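The pick-leftmost / account / reinsert cycle just described can be sketched as follows. The kernel keys a red-black tree on virtual runtime; a binary heap is used here purely as a stand-in that also always yields the task with the smallest vruntime. The function name run_cfs and the fixed charge per scheduling decision are assumptions for the example.

```python
import heapq

def run_cfs(tasks, slice_ns, rounds):
    """tasks: dict of name to starting vruntime (ns). Simulate `rounds`
    scheduling decisions, charging `slice_ns` to whichever task ran."""
    tree = [(vruntime, name) for name, vruntime in tasks.items()]
    heapq.heapify(tree)
    history = []
    for _ in range(rounds):
        vruntime, name = heapq.heappop(tree)    # "leftmost node": smallest vruntime
        history.append(name)
        vruntime += slice_ns                    # account the execution time
        heapq.heappush(tree, (vruntime, name))  # reinsert while still runnable
    return history
```

Because the task that has run least is always chosen next, equally-weighted tasks end up strictly alternating, which is the fairness property the text describes.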
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
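The two mechanisms above (fixed slices per nice level, plus a temporary boost or penalty based on behaviour) might be sketched like this. All constants here (the 200 ms to 5 ms slice range, the 100-ms-per-bonus-point rule, the cap of 5) are invented for illustration and are not the kernel's actual values.

```python
MIN_NICE, MAX_NICE = -20, 19

def timeslice_ms(nice):
    """Fixed slice per nice level: nicer (higher) values get less CPU.
    Linear map from 200 ms at nice -20 down to 5 ms at nice +19."""
    assert nice >= MIN_NICE and MAX_NICE >= nice
    return round(200 - (nice - MIN_NICE) * (195 / 39))

def effective_priority(nice, waited_ms, ran_ms):
    """Dynamic part: boost a task that has waited a long time, and
    penalise one that has been hogging the CPU (capped at 5 either way)."""
    bonus = min(5, waited_ms // 100) - min(5, ran_ms // 100)
    return max(MIN_NICE, min(MAX_NICE, nice - bonus))
```

The fixed table keeps slices independent of clock speed, while the dynamic bonus is what keeps long-waiting interactive programs responsive.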
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing a contrast of the CFS scheduler right now; please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS realizes the model of a scheduler that can execute precise multitasking on real hardware. Precise multitasking means that each process runs at an equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can execute at a time while the other tasks wait, which would otherwise give the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance between processes, CFS maintains a wait runtime for each process and tries to pick the process with the highest wait runtime value. To approximate precise multitasking, CFS splits up the CPU time between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue but in a self-balancing red-black tree: tasks with a higher need for CPU time are stored toward the leftmost node, and tasks with a lower need for CPU time are stored on the right side of the tree. The scheduler picks the leftmost task and gives it CPU time to run. The tree then re-balances itself, and new tasks can be inserted.&lt;br /&gt;
&lt;br /&gt;
CFS is designed in a way that does not need fixed time slices. Its time accounting has nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3276</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3276"/>
		<updated>2010-10-13T16:39:18Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Sources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as addressing the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which was changed in 2.6.23. CFS is not based on a queue like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying, Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time; this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
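Points 1 to 3 of the original scheduler's behaviour can be condensed into a small sketch. The function names are invented for the example; only the priority-ordered scan and the 0.1-second quantum come from the article.

```python
QUANTUM = 0.1  # seconds; the historical time slice mentioned above

def pick_run_queue(run_queues):
    """Scan queues from highest to lowest priority and return the
    first non-empty one (run_queues is ordered highest first)."""
    for queue in run_queues:
        if queue:
            return queue
    return None

def run_once(queue):
    """Give the head thread one quantum, then rotate it to the tail
    so every thread in the queue gets an equal share."""
    thread = queue.pop(0)
    queue.append(thread)
    return thread, QUANTUM
```

A shorter QUANTUM would mean more rotations per second, which is the context-switch overhead the article warns about.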
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included an O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing each thread in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
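A minimal sketch of the two-array idea in point 2, with the dynamic slice lengths of point 3 left out for brevity. The class and method names are invented, and a real run queue indexes by priority rather than sorting a list; the point is only the array swap.

```python
class TwoArrayScheduler:
    def __init__(self):
        self.active = []     # (priority, thread) pairs with slice remaining
        self.expired = []    # threads that used up their slice

    def add(self, priority, thread):
        self.active.append((priority, thread))
        self.active.sort(reverse=True)       # highest priority first

    def run_one(self):
        """Run the highest-priority active thread for its whole slice,
        then retire it to the expired array. When the active array
        empties, the two arrays are swapped."""
        if not self.active:
            self.active, self.expired = self.expired, self.active
            self.active.sort(reverse=True)
        if not self.active:
            return None
        priority, thread = self.active.pop(0)
        self.expired.append((priority, thread))
        return thread
```

The swap is what limits the starvation concern raised below: a high-priority thread cannot run twice until every thread with a remaining slice has run once.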
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think this could lead to starvation situations if the priority was high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think we have done enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article; please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the various Linux schedulers. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails so we can sort out who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay drafts/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and &lt;br /&gt;
remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea &lt;br /&gt;
of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was &lt;br /&gt;
the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It became more complex than its &lt;br /&gt;
predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a &lt;br /&gt;
scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task &lt;br /&gt;
did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute &lt;br /&gt;
longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and &lt;br /&gt;
lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware &lt;br /&gt;
architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Up to Linux 2.6.23, the 2.6 kernel used another scheduler: the O(1) scheduler. It could select the next &lt;br /&gt;
task to run in constant time, independent of the number of tasks in the system, and it kept track of the tasks in a &lt;br /&gt;
run queue. The scheduler offered much better scalability. To determine whether a task was I/O bound or processor bound, the &lt;br /&gt;
scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain and most of &lt;br /&gt;
it was devoted to calculating heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the &lt;br /&gt;
scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of maintaining &lt;br /&gt;
fairness in providing processor time to tasks, meaning each task gets a fair amount of time to run on the processor. &lt;br /&gt;
When a task&#039;s share of time is out of balance, the task has to be given more time, because the scheduler has to keep &lt;br /&gt;
things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its virtual &lt;br /&gt;
runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing, &lt;br /&gt;
and its operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. &lt;br /&gt;
Tasks with the greatest need for the processor are stored on the left side of the tree, while tasks with a lower need for the CPU &lt;br /&gt;
are stored on the right side. To keep fairness, the scheduler takes the leftmost node from the tree. The &lt;br /&gt;
scheduler then accounts for the execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is reinserted &lt;br /&gt;
into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side &lt;br /&gt;
of the tree migrate to the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing a contrast of the CFS scheduler right now; please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS realizes the model of a scheduler that can execute precise multitasking on real hardware. Precise multitasking means that each process runs at an equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can execute at a time while the other tasks wait, which would otherwise give the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance between processes, CFS maintains a wait runtime for each process and tries to pick the process with the highest wait runtime value. To approximate precise multitasking, CFS splits up the CPU time between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue but in a self-balancing red-black tree: tasks with a higher need for CPU time are stored toward the leftmost node, and tasks with a lower need for CPU time are stored on the right side of the tree. The scheduler picks the leftmost task and gives it CPU time to run. The tree then re-balances itself, and new tasks can be inserted.&lt;br /&gt;
&lt;br /&gt;
CFS is designed in a way that does not need fixed time slices. Its time accounting has nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3275</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3275"/>
		<updated>2010-10-13T16:38:42Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as addressing the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which was changed in 2.6.23. CFS is not based on a queue like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying, Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time; this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The January 2002 version included an O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order from highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, while threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
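The two-array scheme in point 2 can be modelled in a few lines of Python (a simplification where every task runs a full slice and then expires; names are illustrative, not kernel identifiers):&lt;br /&gt;

```python
# Toy model of the O(1) scheduler's two priority-queue arrays.

from collections import deque

class O1Scheduler:
    def __init__(self, tasks):
        # tasks: list of (name, priority, slice_ms);
        # a lower priority number means higher priority here.
        self.active = deque(sorted(tasks, key=lambda t: t[1]))
        self.expired = deque()

    def run_one(self):
        """Run the highest-priority active task for its full slice,
        then move it to the expired array.  When the active array
        empties, the two array references are swapped in O(1)."""
        if not self.active:
            self.active, self.expired = self.expired, self.active
        name, prio, slice_ms = self.active.popleft()
        self.expired.append((name, prio, slice_ms))
        return name

sched = O1Scheduler([("logger", 5, 100), ("ui", 1, 150)])
```

(Swapping the two array references once the active one empties is what keeps each scheduling decision O(1) regardless of the number of threads.)&lt;br /&gt;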
&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think it could lead to starvation if the priority was high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for Linux. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, E. Douglas, C. Douglass Locke, and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, which made adding and removing &lt;br /&gt;
processes efficient. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling &lt;br /&gt;
classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first &lt;br /&gt;
Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it &lt;br /&gt;
also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The &lt;br /&gt;
scheduler divided time into epochs, and each task could execute up to its time slice within an epoch. If a task did not &lt;br /&gt;
use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the &lt;br /&gt;
next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked &lt;br /&gt;
useful support for real-time workloads. On top of that, it had no features to exploit new hardware architectures such &lt;br /&gt;
as multi-core processors.&lt;br /&gt;
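The epoch/time-slice carry-over just described can be sketched as a toy model (the base slice and structure are invented for illustration, not kernel code):&lt;br /&gt;

```python
# Toy model of the 2.4-era epoch scheme: each task gets a base slice
# per epoch, and time it left unused is added to its next slice.

BASE_SLICE = 60  # ms per epoch, illustrative number only

def next_epoch_slices(tasks, used):
    """tasks: task names; used: ms each task actually consumed this
    epoch.  Leftover time carries into the next epoch's slice, so an
    I/O-bound task that blocks early gets a longer slice next time."""
    slices = {}
    for name in tasks:
        leftover = max(0, BASE_SLICE - used.get(name, BASE_SLICE))
        slices[name] = BASE_SLICE + leftover
    return slices

# An I/O-bound task used 20 ms of its 60 ms; a CPU hog used all 60.
slices = next_epoch_slices(["io_task", "cpu_task"],
                           {"io_task": 20, "cpu_task": 60})
```

(The carry-over favours interactive, I/O-bound tasks, but note the O(n) cost: every task is visited when slices are recomputed.)&lt;br /&gt;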
&lt;br /&gt;
Linux 2.6 introduced another scheduler, which was used up to Linux 2.6.23: the O(1) scheduler. It made every scheduling &lt;br /&gt;
decision in constant time, independent of the number of tasks in the system, and it kept track of the tasks in run &lt;br /&gt;
queues. This made the scheduler far more scalable. To determine whether a task was I/O-bound or processor-bound, the &lt;br /&gt;
scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and most of &lt;br /&gt;
it existed to calculate those heuristics, it was replaced in Linux 2.6.23 by the CFS scheduler, which current Linux &lt;br /&gt;
versions still use.&lt;br /&gt;
&lt;br /&gt;
With the Linux 2.6.23 release, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining &lt;br /&gt;
fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. &lt;br /&gt;
When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to keep &lt;br /&gt;
things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its &lt;br /&gt;
virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree, which is &lt;br /&gt;
self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler &lt;br /&gt;
to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of &lt;br /&gt;
the tree, and tasks with a lower need for the CPU toward the right side. To keep fairness, the scheduler takes the &lt;br /&gt;
leftmost node from the tree, accounts the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. &lt;br /&gt;
If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given &lt;br /&gt;
time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
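The pick-leftmost/charge/reinsert loop above can be sketched with a heap standing in for the red-black tree, since both yield the smallest-vruntime task in O(log n) (a toy model, not kernel code):&lt;br /&gt;

```python
import heapq

# Minimal sketch of CFS-style virtual-runtime accounting.  A binary
# heap stands in for the kernel's red-black tree: both return the
# task with the smallest vruntime (the "leftmost node") in O(log n).

timeline = []  # entries are (vruntime_ns, name)

def add_task(name, vruntime_ns=0):
    heapq.heappush(timeline, (vruntime_ns, name))

def run_leftmost(delta_ns):
    """Pick the task with the smallest vruntime, charge it delta_ns
    of execution, and reinsert it (assuming it stays runnable)."""
    vruntime, name = heapq.heappop(timeline)
    heapq.heappush(timeline, (vruntime + delta_ns, name))
    return name

add_task("compiler")
add_task("browser")
```

(With equal charges per pick, the two tasks simply alternate, which is exactly the fairness property CFS is after; a task that sleeps keeps a low vruntime and is favoured when it wakes.)&lt;br /&gt;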
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
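As a toy illustration of fixed, non-linear per-nice-level slices (the formula and numbers below are invented for demonstration; the kernel&#039;s actual table differs):&lt;br /&gt;

```python
# Illustrative nice-level to time-slice mapping: non-linear, with
# negative nice values rewarded much more steeply than positive ones
# are penalized.  All numbers are made up for this sketch.

def time_slice_ms(nice):
    if nice not in range(-20, 20):
        raise ValueError("nice must be in -20..19")
    if nice in range(-20, 0):
        # negative nice: slices grow quickly, 10 ms per step
        return 100 + 10 * (-nice)
    # zero or positive nice: slices shrink gently, floored at 5 ms
    return max(5, 100 - 5 * nice)
```

(So nice -20 gets a 300 ms slice, nice 0 gets 100 ms, and nice +19 bottoms out at 5 ms: the greedy end of the spectrum gains far more than the nice end loses.)&lt;br /&gt;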
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
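That dynamic adjustment can be sketched as a simple boost/penalty rule (thresholds and step sizes are invented for illustration):&lt;br /&gt;

```python
# Toy sketch of dynamic priority adjustment: tasks that have waited
# a long time are boosted, tasks that have hogged the CPU are
# penalized.  The step sizes below are illustrative only.

WAIT_BOOST_MS = 500   # one priority step per full 500 ms spent waiting
HOG_PENALTY_MS = 400  # one priority step per full 400 ms of CPU hogging

def adjust_priority(base_priority, waited_ms, ran_ms):
    """Return an effective priority; a lower number means higher
    priority in this toy model, so boosts subtract and penalties add."""
    boost = waited_ms // WAIT_BOOST_MS
    penalty = ran_ms // HOG_PENALTY_MS
    return base_priority - boost + penalty
```

(The effect is self-correcting: a starved task climbs in priority until it runs, then its boost disappears and any hogging pushes it back down.)&lt;br /&gt;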
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing a contrast of the CFS scheduler right now; please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS models precise multitasking on real hardware. Precise multitasking means that each process runs at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can execute at a time while the other tasks wait, which would give the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance across processes, CFS maintains a wait run-time for each process and tries to pick the process with the highest wait run-time value. To approximate real multitasking, CFS splits up the CPU time between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue but in a self-balancing red-black tree. Tasks with a higher need for CPU time are stored on the left side of the tree, and tasks with a lower need for CPU time on the right side. The scheduler picks the leftmost task and gives it CPU time to run; the tree then re-balances itself, and new tasks can be inserted.&lt;br /&gt;
&lt;br /&gt;
CFS is designed in a way that it does not need timeslicing: it accounts time at nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
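The ideal &amp;quot;precise multitasking&amp;quot; model that CFS approximates is easy to state directly (nice-level weights, which would skew the shares, are omitted here):&lt;br /&gt;

```python
from fractions import Fraction

# Ideal precise multitasking: with n runnable tasks, each receives
# exactly 1/n of the CPU.  Real CFS approximates this with virtual
# runtimes; nice-level weights would skew the shares and are omitted.

def fair_share(tasks):
    n = len(tasks)
    return {t: Fraction(1, n) for t in tasks}

# Four runnable tasks: each should receive 25% of the CPU.
shares = fair_share(["a", "b", "c", "d"])
```

(The shares always sum to exactly 1, which is the invariant the vruntime bookkeeping works to preserve on real, one-task-at-a-time hardware.)&lt;br /&gt;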
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3273</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3273"/>
		<updated>2010-10-13T16:32:51Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time; as a result, as more tasks were added, the scheduler became slower. In addition to this, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as addressing the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23: CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline for future predictions. The aim of CFS is to maximize CPU utilization while maximizing interactive performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors, and it has a constant execution time regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time; this means the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used the original scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads are assigned a scheduling priority, which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues from highest to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found, the system spends an equal time slice on each thread in that queue. This time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae for determining thread priority, which are out of scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The January 2002 version included an O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order from highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, while threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think it could lead to starvation if the priority was high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for Linux. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, E. Douglas, C. Douglass Locke, and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, which made adding and removing &lt;br /&gt;
processes efficient. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling &lt;br /&gt;
classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first &lt;br /&gt;
Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it &lt;br /&gt;
also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The &lt;br /&gt;
scheduler divided time into epochs, and each task could execute up to its time slice within an epoch. If a task did not &lt;br /&gt;
use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the &lt;br /&gt;
next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked &lt;br /&gt;
useful support for real-time workloads. On top of that, it had no features to exploit new hardware architectures such &lt;br /&gt;
as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6 introduced another scheduler, which was used up to Linux 2.6.23: the O(1) scheduler. It made every scheduling &lt;br /&gt;
decision in constant time, independent of the number of tasks in the system, and it kept track of the tasks in run &lt;br /&gt;
queues. This made the scheduler far more scalable. To determine whether a task was I/O-bound or processor-bound, the &lt;br /&gt;
scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain, and most of &lt;br /&gt;
it existed to calculate those heuristics, it was replaced in Linux 2.6.23 by the CFS scheduler, which current Linux &lt;br /&gt;
versions still use.&lt;br /&gt;
&lt;br /&gt;
With the Linux 2.6.23 release, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining &lt;br /&gt;
fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. &lt;br /&gt;
When a task&#039;s share of time is out of balance, that task has to be given more time, because the scheduler has to keep &lt;br /&gt;
things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its &lt;br /&gt;
virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree, which is &lt;br /&gt;
self-balancing and whose operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler &lt;br /&gt;
to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of &lt;br /&gt;
the tree, and tasks with a lower need for the CPU toward the right side. To keep fairness, the scheduler takes the &lt;br /&gt;
leftmost node from the tree, accounts the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. &lt;br /&gt;
If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given &lt;br /&gt;
time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing a contrast of the CFS scheduler right now and will add a first version of that part soon.&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3244</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3244"/>
		<updated>2010-10-13T14:35:15Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Sources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time; as a result, as more tasks were added, the scheduler became slower. In addition to this, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as addressing the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23. CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline used to make predictions about future execution. The aim of CFS is to maximize CPU utilization while also maximizing performance.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but was disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors, and it has a constant execution time regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
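The run-queue scan described in points 1–3 above can be sketched in a few lines. This is a toy model, not FreeBSD code; the function name and thread names are made up purely for illustration.&lt;br /&gt;

```python
# Toy model of the classic FreeBSD scheduler's run-queue scan (illustrative
# names only): threads are bucketed by priority, and the scheduler runs the
# first thread of the highest-priority non-empty run queue.

def pick_next_thread(run_queues):
    """run_queues: list of lists of thread names, index 0 = highest priority."""
    for queue in run_queues:
        if queue:
            return queue[0]   # first thread of the first non-empty queue
    return None               # nothing runnable: the system idles
```

With queues like [[], ["pagedaemon"], ["sh", "cron"]], the empty highest-priority queue is skipped and "pagedaemon" runs first.&lt;br /&gt;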
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  The scheduler uses two priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and it executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, while threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&lt;br /&gt;
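The two-array (active/exhausted) design in point 2 above can be modelled roughly as follows. This is a simplified sketch that ignores priorities and dynamic time slices; the class and method names are invented, not kernel code.&lt;br /&gt;

```python
from collections import deque

# Toy model of the O(1) scheduler's two queue arrays: threads run from the
# "active" queue; a thread that uses up its time slice moves to the
# "exhausted" queue, and the two queues swap roles when "active" is empty.

class O1Scheduler:
    def __init__(self, threads):
        self.active = deque(threads)
        self.exhausted = deque()

    def run_next(self):
        if not self.active:
            # Every thread has used its slice: swap the arrays in O(1).
            self.active, self.exhausted = self.exhausted, self.active
        thread = self.active.popleft()
        self.exhausted.append(thread)   # time slice used up
        return thread
```

The O(1) bound comes from the swap being a pointer exchange rather than a scan over all threads.&lt;br /&gt;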
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and think it is useful. Do you think we have enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article; please tell me what you think of the following. Note that I have included the resource used as a footnote, whose placement I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the various Linux schedulers. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails to record who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice so the task could execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up until Linux 2.6.23: the O(1) scheduler. It needed the same amount of time to make a scheduling decision regardless of how many tasks there were, and it kept track of tasks in run queues. The scheduler offered much more scalability. To determine whether a task was I/O-bound or processor-bound, it used interactivity metrics based on numerous heuristics. Because the code was difficult to maintain, and a large part of it existed only to calculate these heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler used in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS scheduler took the O(1) scheduler's place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks, meaning each task gets a fair amount of time to run on the processor. When a task's share of time is out of balance, that task has to be given more time, because the scheduler must preserve fairness. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To maintain fairness, the scheduler picks the leftmost node of the tree, accounts for the task's execution time on the CPU, and adds it to the task's virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
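A toy model of this bookkeeping: the kernel uses a red-black tree ordered by virtual runtime, but a min-heap reproduces the same "always pick the task with the smallest virtual runtime" behaviour in a few lines. All names here are illustrative, not kernel code.&lt;br /&gt;

```python
import heapq

# Toy model of CFS fairness bookkeeping. Runnable tasks are ordered by
# virtual runtime; each scheduling decision takes the task that has had
# the least CPU time so far, charges it for its slice, and reinserts it.

class ToyCFS:
    def __init__(self):
        self.tasks = []                     # heap of (vruntime, name) pairs

    def enqueue(self, name, vruntime=0.0):
        heapq.heappush(self.tasks, (vruntime, name))

    def schedule(self, slice_ms):
        vruntime, name = heapq.heappop(self.tasks)   # "leftmost node"
        vruntime += slice_ms                         # account CPU time used
        # A real scheduler reinserts only if the task is still runnable;
        # this sketch always reinserts.
        heapq.heappush(self.tasks, (vruntime, name))
        return name
```

Two tasks enqueued with equal virtual runtime will simply alternate, which is the fairness property the tree ordering is meant to guarantee.&lt;br /&gt;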
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the nice shell command. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
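The boost/penalty idea above can be sketched as follows. The thresholds and boost value here are invented purely for illustration and do not come from the kernel; the point is only the shape of the mechanism.&lt;br /&gt;

```python
# Hypothetical sketch of dynamic priority adjustment: a task that has
# waited a long time gets a temporary priority boost, while a CPU hog is
# temporarily demoted. Lower numbers mean higher priority, as with nice
# levels. The 1000 ms thresholds and the boost of 5 are made-up values.

def effective_priority(base_priority, wait_ms, cpu_ms, boost=5):
    if wait_ms > 1000:                 # starved: raise priority temporarily
        return base_priority - boost
    if cpu_ms > 1000:                  # hogging the CPU: demote temporarily
        return base_priority + boost
    return base_priority               # behaving normally: leave unchanged
```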
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3243</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3243"/>
		<updated>2010-10-13T14:34:48Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduling algorithm ran in O(n) time, so as more tasks were added the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created problems with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), i.e. constant, time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23. CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline used to make predictions about future execution. The aim of CFS is to maximize CPU utilization while also maximizing performance.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but was disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors, and it has a constant execution time regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  The scheduler uses two priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and it executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, while threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and think it is useful. Do you think we have enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article; please tell me what you think of the following. Note that I have included the resource used as a footnote, whose placement I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the various Linux schedulers. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our names and contact emails to record who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice so the task could execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up until Linux 2.6.23: the O(1) scheduler. It needed the same amount of time to make a scheduling decision regardless of how many tasks there were, and it kept track of tasks in run queues. The scheduler offered much more scalability. To determine whether a task was I/O-bound or processor-bound, it used interactivity metrics based on numerous heuristics. Because the code was difficult to maintain, and a large part of it existed only to calculate these heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler used in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS scheduler took the O(1) scheduler's place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks, meaning each task gets a fair amount of time to run on the processor. When a task's share of time is out of balance, that task has to be given more time, because the scheduler must preserve fairness. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The execution model of CFS changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To maintain fairness, the scheduler picks the leftmost node of the tree, accounts for the task's execution time on the CPU, and adds it to the task's virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the nice shell command. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[2] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3241</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3241"/>
		<updated>2010-10-13T14:32:47Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduling algorithm ran in O(n) time, so as more tasks were added the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created problems with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), i.e. constant, time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23: CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization while also maximizing interactive performance.&lt;br /&gt;
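To illustrate the CFS idea sketched above, here is a minimal Python stand-in. Python's standard library has no red-black tree, so a binary heap substitutes for it; both give O(log n) insertion and cheap access to the smallest element. The task names and virtual runtimes are made up.

```python
# Sketch of the CFS idea using a min-heap as a stand-in for the
# kernel's red-black tree (both support O(log n) insertion and
# efficient lookup of the minimum element).
import heapq

timeline = []  # entries are (virtual_runtime, task_name)
for vruntime, name in [(12.0, "editor"), (3.5, "compiler"), (9.1, "daemon")]:
    heapq.heappush(timeline, (vruntime, name))

# CFS always runs the task that has received the least CPU time so far,
# i.e. the leftmost node of the tree (here: the heap minimum).
vruntime, name = heapq.heappop(timeline)
print(name)  # compiler -- smallest virtual runtime
```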
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions; this changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
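The queue scan described in points 1-3 might be sketched like this (a hypothetical Python model, not actual FreeBSD code):

```python
# Hypothetical sketch of the original FreeBSD scheduler's queue scan:
# run queues are ordered by priority, and the first thread of the first
# non-empty queue runs for a fixed 0.1 s time slice.

TIME_SLICE = 0.1  # seconds; per the article, unchanged for decades

def pick_next_thread(run_queues):
    """run_queues: list of lists, index 0 = highest priority."""
    for queue in run_queues:   # scan from highest to lowest priority
        if queue:
            return queue[0]    # first thread of first non-empty queue
    return None                # nothing runnable: idle

queues = [[], ["interrupt-handler"], ["shell", "backup-job"]]
print(pick_next_thread(queues))  # interrupt-handler
```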
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. Scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority and executing each thread in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
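Point 2's two-array mechanism can be sketched as follows (an illustrative Python model with invented task data, not the kernel's implementation):

```python
# Sketch of the O(1) scheduler's two-array trick: tasks that exhaust
# their time slice move to the 'expired' array; once 'active' empties,
# the two arrays are swapped in O(1), which bounds how long any expired
# task can starve.

active = [{"name": "compiler", "slice": 0}, {"name": "editor", "slice": 3}]
expired = []

still_active = []
for task in active:
    if task["slice"] == 0:        # time slice fully used up
        expired.append(task)
    else:
        still_active.append(task)
active = still_active

if not active:                    # epoch over: constant-time swap
    active, expired = expired, active

print([t["name"] for t in active])  # ['editor']
```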
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our contact emails and names, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity (for example, multi-core CPUs), operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing processes to be added and removed efficiently.[http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html] When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler which supported SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features.[http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html] Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice so the task could execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
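The leftover-time rule of the 2.4 scheduler described above can be illustrated with a small sketch (illustrative units, not real kernel values):

```python
# Sketch of the Linux 2.4 epoch rule: any time a task leaves unused
# from its slice is carried over into its slice for the next epoch.

def next_epoch_slice(base_slice, used):
    """Unused time from this epoch is added to the next epoch's slice."""
    leftover = max(base_slice - used, 0)
    return base_slice + leftover

print(next_epoch_slice(10, 7))   # 13: three unused units carried over
print(next_epoch_slice(10, 10))  # 10: nothing left to carry
```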
&lt;br /&gt;
Early Linux 2.6 kernels, up to 2.6.23, used the O(1) scheduler. It made each scheduling decision in constant time, independent of how many tasks were in the system.[http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html] It kept track of the tasks in a run queue and offered much better scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to manage, and a large part of it existed only to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which remains the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, the scheduler gives it more time in order to restore fairness. To measure this balance, CFS tracks the amount of processor time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing, and operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If it is still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. &lt;br /&gt;
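The pick/run/account/reinsert cycle just described can be simulated in a few lines. A sorted Python list stands in for the kernel's red-black tree, and the 4.0 time units of "execution" per pick are invented for the example:

```python
# Minimal simulation of the CFS cycle: pick the leftmost
# (smallest-vruntime) task, charge it for the CPU time it used,
# and reinsert it. A sorted list stands in for the red-black tree.
import bisect

tree = [(3.5, "compiler"), (9.1, "daemon"), (12.0, "editor")]  # sorted by vruntime

for _ in range(3):
    vruntime, name = tree.pop(0)           # leftmost node: most deserving task
    vruntime += 4.0                        # "run" it and account the CPU time
    bisect.insort(tree, (vruntime, name))  # reinsert while still runnable
    print(name, round(vruntime, 1))
```

Note how the task with the smallest virtual runtime keeps getting picked until its accumulated runtime passes the others, which is exactly the fairness property described above.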
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
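As a purely hypothetical illustration of fixed, nice-level-dependent time slices (the formula and numbers below are invented; the kernel's actual table differs):

```python
# Hypothetical illustration of fixed time slices keyed to nice level:
# nicer (higher) values get shorter slices, greedier (negative) values
# get longer ones. The formula is made up for illustration.

def time_slice_ms(nice):
    """Map a nice level (-20..19) to a fixed time slice in milliseconds."""
    assert nice in range(-20, 20)
    return 100 - 4 * nice          # e.g. nice 0 -> 100 ms

print(time_slice_ms(-20))  # 180 ms: very greedy
print(time_slice_ms(0))    # 100 ms: default
print(time_slice_ms(19))   # 24 ms: very nice
```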
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
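The boost/penalty idea might be modelled like this (an invented formula for illustration only, not the kernel's actual bonus calculation):

```python
# Sketch of the dynamic-priority idea above: tasks that have waited a
# long time get a temporary boost; CPU hogs get a temporary penalty.
# The formula and caps are invented, not the kernel's real bonus math.

def effective_priority(base, waited_ms, ran_ms):
    bonus = min(waited_ms // 100, 5)   # cap the waiting boost
    penalty = min(ran_ms // 100, 5)    # cap the hogging penalty
    return base + bonus - penalty      # higher value = runs sooner here

print(effective_priority(10, waited_ms=400, ran_ms=0))   # 14: boosted
print(effective_priority(10, waited_ms=0, ran_ms=900))   # 5: penalized
```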
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[2] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3236</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3236"/>
		<updated>2010-10-13T14:15:40Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduler algorithm ran in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23: CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization while also maximizing interactive performance.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions; this changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  Scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority and executing each thread in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our contact emails and names, and note who would like to write which part.&lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity (for example, multi-core CPUs), operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing processes to be added and removed efficiently.2 When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler which supported SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features.2 Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice so the task could execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Early Linux 2.6 kernels, up to 2.6.23, used the O(1) scheduler. It made each scheduling decision in constant time, independent of how many tasks were in the system.2 It kept track of the tasks in a run queue and offered much better scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to manage, and a large part of it existed only to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which remains the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, the scheduler gives it more time in order to restore fairness. To measure this balance, CFS tracks the amount of processor time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing, and operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If it is still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. &lt;br /&gt;
&lt;br /&gt;
2 M. Tim Jones, Consultant Engineer, Emulex, http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[2] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3235</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3235"/>
		<updated>2010-10-13T14:13:54Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
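To make points 2 and 3 concrete, here is a hypothetical toy model of the run-queue scan (the number of priority levels, the thread names, and the QUANTUM constant are all invented for illustration):

```python
from collections import deque

# Toy model of the original FreeBSD scheduler's run-queue scan.
# Index 0 is the highest priority; lower queues are only reached
# when every higher-priority queue is empty.
run_queues = [deque(), deque(["X", "Y"]), deque(), deque(["Z"])]
QUANTUM = 0.1  # seconds -- the fixed time slice mentioned above

def next_thread():
    # Scan from highest to lowest priority and run the first thread of the
    # first non-empty queue, rotating it to the back (round robin in-queue).
    for queue in run_queues:
        if queue:
            thread = queue[0]
            queue.rotate(-1)
            return thread
    return None  # nothing runnable: idle

trace = [next_thread() for _ in range(4)]
print(trace)  # ['X', 'Y', 'X', 'Y'] -- "Z" waits while higher queues are busy
```

The trace shows the equal-slice round robin inside one queue, and also why thread priority must be recomputed periodically (the formulae the article describes): otherwise low-priority threads like "Z" would starve.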
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  Scheduler uses two priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority and executing threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
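The two-array design in point 2 can be sketched as a toy model (the task names, the number of priority levels, and the helper functions here are invented; the real scheduler used 140 priority levels):

```python
from collections import deque

# Toy model of the O(1) scheduler's two priority-queue arrays.
# Lower index = higher priority.
NUM_PRIOS = 5

def make_array():
    return [deque() for _ in range(NUM_PRIOS)]

def pick_next(active):
    """Return (priority, task) from the highest-priority non-empty queue."""
    for prio, queue in enumerate(active):
        if queue:
            return prio, queue.popleft()
    return None

active, expired = make_array(), make_array()
active[1].extend(["A", "B"])
active[3].append("C")

order = []
while (picked := pick_next(active)) is not None:
    prio, task = picked
    order.append(task)
    expired[prio].append(task)  # time slice exhausted: park in expired array

# Once the active array drains, the two arrays swap roles in O(1).
active, expired = expired, active
print(order)  # ['A', 'B', 'C'] -- highest priority first
```

The swap is what bounds starvation in this design: even the lowest-priority task in the expired array gets a fresh slice on the next round, which speaks to the starvation concern raised above.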
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough research to write the essay, or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the various Linux schedulers. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently.2 When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first scheduler with SMP support.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features.2 Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and had no useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Early Linux 2.6 kernels, up to 2.6.23, used the O(1) scheduler. It could select the next task in a constant amount of time, regardless of how many tasks were runnable.2 It kept track of tasks in a run queue and offered much better scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics based on numerous heuristics. Because the code was difficult to maintain and a large part of it existed just to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is still the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time to run on the processor. When a task&#039;s share is out of balance, the task has to be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The way CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler picks the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left side to maintain fairness. &lt;br /&gt;
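A toy sketch of this bookkeeping (a Python heap stands in for the kernel's red-black tree, since both give O(log n) access to the task with the smallest virtual runtime; the task names and slice length are invented):

```python
import heapq

# Each entry is (virtual_runtime, task_name); the heap keeps the task
# with the least virtual runtime -- the "leftmost node" -- on top.
timeline = [(0.0, name) for name in ("compiler", "daemon", "editor")]
heapq.heapify(timeline)

def schedule_once(slice_ms):
    # Pick the leftmost task: the one with the least virtual runtime.
    vruntime, name = heapq.heappop(timeline)
    # Account the time it just ran, then reinsert it while it stays runnable.
    heapq.heappush(timeline, (vruntime + slice_ms, name))
    return name

ran = [schedule_once(10.0) for _ in range(6)]
print(ran)  # each task runs exactly twice -- no task is starved
```

Because every run pushes a task's virtual runtime up, tasks that have run least always drift back to the front, which is exactly the migration from right to left described above.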
&lt;br /&gt;
2 M. Tim Jones, Consultant Engineer, Emulex&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by dividing CPU usage into time slices (also called quanta), which are the lengths of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can adjust the niceness of a program using the shell command nice; nice values range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[2] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2691</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2691"/>
		<updated>2010-10-09T19:28:34Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2.  Scheduler uses two priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority and executing threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough research to write the essay, or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the various Linux schedulers. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently.2 When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first scheduler with SMP support.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features.2 Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and had no useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Early Linux 2.6 kernels, up to 2.6.23, used the O(1) scheduler. It could select the next task in a constant amount of time, regardless of how many tasks were runnable.2 It kept track of tasks in a run queue and offered much better scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics based on numerous heuristics. Because the code was difficult to maintain and a large part of it existed just to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is still the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time to run on the processor. When a task&#039;s share is out of balance, the task has to be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The way CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler picks the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left side to maintain fairness. &lt;br /&gt;
&lt;br /&gt;
2 M. Tim Jones, Consultant Engineer, Emulex&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2690</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2690"/>
		<updated>2010-10-09T19:27:28Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time. As a result, as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, though it was disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. FreeBSD requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. All calls to scheduling code are resolved at compile time, which eliminates the overhead of indirect function calls for scheduling decisions.&lt;br /&gt;
      3. Kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  Threads are assigned a scheduling priority, which determines which &#039;run queue&#039; a thread is placed in.&lt;br /&gt;
      2.  The system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  Once a non-empty queue is found, the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing throughput.&lt;br /&gt;
      4.  The article then provides detailed formulae for determining thread priority, which is out of scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included the O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses two priority-queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and it executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, while threads with remaining time slice are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think this could lead to starvation if the priority were high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
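The active/expired two-array idea described above can be illustrated with a minimal Python sketch. This is not kernel code: the task names, priority levels, and single-epoch loop are hypothetical simplifications chosen to show the mechanism (highest-priority non-empty queue runs first; a thread that exhausts its slice moves to the expired array).

```python
from collections import deque

def run_epoch(tasks):
    # Hypothetical sketch: 3 priority levels, 0 is the highest priority.
    levels = 3
    active = [deque() for _ in range(levels)]
    expired = [deque() for _ in range(levels)]
    for t in tasks:
        active[t["prio"]].append(t)

    order = []
    while any(active):
        # Pick the first task from the highest-priority non-empty queue.
        q = next(q for q in active if q)
        task = q.popleft()
        order.append(task["name"])
        # The task used up its time slice: move it to the expired array.
        expired[task["prio"]].append(task)
    # Once the active array drains, the real scheduler swaps the two
    # arrays in O(1) and a new epoch begins; here we just return.
    return order, expired
```

With tasks `a` and `c` at priority 1 and `b` at priority 0, `run_epoch` executes `b` first, then `a` and `c`, matching the highest-to-lowest scan described above.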
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and think it is useful. Do you think this is enough research to write the essay, or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part introducing the various Linux schedulers. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy using a circular queue, which allowed the scheduler to add and remove processes efficiently.2 When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features.2 Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice so the task could execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up to Linux 2.6.23: the O(1) scheduler. It made scheduling decisions in the same amount of time regardless of how many tasks there were.2 It kept track of tasks in run queues and offered much more scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to manage, and most of it existed to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler in current Linux versions.&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS uses the idea of maintaining fairness in providing processor time to tasks: each task gets a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, the task has to be given more time, because the scheduler has to keep fairness. To determine this balance, CFS maintains the amount of time given to each task, which is called its virtual runtime.&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing and operates in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need for the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree. The scheduler then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side of the tree migrate toward the left side to maintain fairness. &lt;br /&gt;
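The pick-the-leftmost-node loop described above can be sketched in a few lines of Python. This is a hedged approximation, not kernel code: it stands in for the red-black tree with a binary heap (both give O(log n) insertion and an efficient minimum-virtual-runtime lookup), and the task names and slice costs are made up for illustration.

```python
import heapq

def schedule(tasks, slices):
    # tasks: dict mapping task name to its starting virtual runtime.
    # slices: execution cost charged in each scheduling round.
    tree = [(vr, name) for name, vr in tasks.items()]
    heapq.heapify(tree)
    history = []
    for cost in slices:
        # "Leftmost node" == the task with the smallest virtual runtime.
        vr, name = heapq.heappop(tree)
        history.append(name)
        # Account the execution time into the task's virtual runtime,
        # then reinsert it since it is still runnable.
        heapq.heappush(tree, (vr + cost, name))
    return history
```

Starting with task a at virtual runtime 0 and task b at 5, three rounds of 10 units each run a, then b, then a again: the task that has received the least CPU time always moves to the front, which is exactly the fairness behavior described above.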
&lt;br /&gt;
2 M. Tim Jones, Consultant Engineer, Emulex&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2688</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2688"/>
		<updated>2010-10-09T19:26:23Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23: CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and interactive performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, though it was disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. FreeBSD requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. All calls to scheduling code are resolved at compile time, which eliminates the overhead of indirect function calls for scheduling decisions.&lt;br /&gt;
      3. Kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  Threads are assigned a scheduling priority, which determines which &#039;run queue&#039; a thread is placed in.&lt;br /&gt;
      2.  The system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  Once a non-empty queue is found, the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing throughput.&lt;br /&gt;
      4.  The article then provides detailed formulae for determining thread priority, which is out of scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included the O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses two priority-queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and it executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, while threads with remaining time slice are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think this could lead to starvation if the priority were high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and think it is useful. Do you think this is enough research to write the essay, or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep the discussion section easy to follow.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy using a circular queue, which allowed the scheduler to add and remove processes efficiently.2 When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP.&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features.2 Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice so the task could execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up to Linux 2.6.23: the O(1) scheduler. It made scheduling decisions in the same amount of time regardless of how many tasks there were.2 It kept track of tasks in run queues and offered much more scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to manage, and most of it existed to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler in current Linux versions.&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS uses the idea of maintaining fairness in providing processor time to tasks: each task gets a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, the task has to be given more time, because the scheduler has to keep fairness. To determine this balance, CFS maintains the amount of time given to each task, which is called its virtual runtime.&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. It is self-balancing and operates in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need for the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree. The scheduler then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side of the tree migrate toward the left side to maintain fairness. &lt;br /&gt;
&lt;br /&gt;
2 M. Tim Jones, Consultant Engineer, Emulex&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2395</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2395"/>
		<updated>2010-10-06T14:15:26Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduler algorithm operated in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23: CFS is not based on a queue, as most schedulers are, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and interactive performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, though it was disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. FreeBSD requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. All calls to scheduling code are resolved at compile time, which eliminates the overhead of indirect function calls for scheduling decisions.&lt;br /&gt;
      3. Kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  Threads are assigned a scheduling priority, which determines which &#039;run queue&#039; a thread is placed in.&lt;br /&gt;
      2.  The system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  Once a non-empty queue is found, the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing throughput.&lt;br /&gt;
      4.  The article then provides detailed formulae for determining thread priority, which is out of scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included the O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses two priority-queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and it executes threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, while threads with remaining time slice are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think this could lead to starvation if the priority were high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and think it is useful. Do you think this is enough research to write the essay, or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A key component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has grown in complexity, for example with multi-core CPUs, operating system schedulers have evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion section.&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2394</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2394"/>
		<updated>2010-10-06T14:14:41Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know the details of how it ran, the scheduling algorithm operated in O(n) time, so the scheduler became slower as more tasks were added. In addition, a single data structure was used to manage all processors of a system, which created problems with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing its support for symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree, which implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
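To make the red-black-tree idea concrete, here is a toy sketch (my own, not kernel code) of CFS-style selection: the runnable task with the smallest virtual runtime always runs next, and heavier-weighted tasks accumulate virtual runtime more slowly. A Python heap stands in for the kernel&#039;s red-black tree; both give O(log n) insert and extract-min.&lt;br /&gt;

```python
import heapq

class Task:
    def __init__(self, name, weight=1.0):
        self.name = name
        self.weight = weight   # stands in for the nice-level weight
        self.vruntime = 0.0    # virtual runtime, as in CFS

    def __lt__(self, other):
        # True when self has the smaller vruntime, so the heap
        # always yields the task that has run the least so far
        return other.vruntime > self.vruntime

def schedule(tasks, slices):
    """Run the task with the smallest vruntime, `slices` times.

    The real CFS keeps runnable tasks in a red-black tree keyed by
    vruntime; a binary heap reproduces the same pick-the-leftmost
    behaviour for this sketch.
    """
    heapq.heapify(tasks)
    order = []
    for _ in range(slices):
        task = heapq.heappop(tasks)          # leftmost node of the tree
        order.append(task.name)
        task.vruntime += 1.0 / task.weight   # heavy tasks age slower
        heapq.heappush(tasks, task)
    return order
```

Running two tasks with weights 2.0 and 1.0 for six slices gives the heavier task four slices and the lighter one two, i.e. a 2:1 CPU split.&lt;br /&gt;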
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, although it was disabled by default in the early versions; this changed in later releases. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessor and multiprocessor systems. It also has a constant execution time, regardless of the number of threads.&lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying, Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads are assigned a scheduling priority that determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues from highest priority to lowest and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found, the system spends an equal time slice on each thread in that run queue. This time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae for determining thread priority, which is beyond the scope of this project.&lt;br /&gt;
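The scan-and-round-robin behaviour in points 2 and 3 can be sketched in a few lines of Python (a toy model of my own, not FreeBSD code; note that in BSD a lower priority number means higher priority):&lt;br /&gt;

```python
from collections import deque

QUANTUM = 0.1  # seconds; the long-standing BSD time slice

def pick_queue(run_queues):
    """Return the highest-priority non-empty run queue.

    `run_queues` maps a priority number (lower number means higher
    priority, BSD style) to a deque of thread names.
    """
    for prio in sorted(run_queues):
        if run_queues[prio]:
            return prio
    return None

def round_robin(run_queues, slices):
    """Give one fixed QUANTUM to each thread of the chosen queue in turn."""
    timeline = []
    for _ in range(slices):
        prio = pick_queue(run_queues)
        if prio is None:
            break                              # nothing runnable
        thread = run_queues[prio].popleft()
        timeline.append((thread, QUANTUM))
        run_queues[prio].append(thread)        # back of the same queue
    return timeline
```

With queues {4: [x, y], 8: [z]}, the two threads at priority 4 alternate and z never runs until their queue drains, which is exactly why the priority decay formulae mentioned in point 4 are needed.&lt;br /&gt;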
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- an overhaul of the original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives a great overview of the evolution of several scheduler versions: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The January 2002 version included the O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses two priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order from highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, while threads with remaining time are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.&lt;br /&gt;
&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would expect it to lead to starvation if one or more threads had a high enough priority.&lt;br /&gt;
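The starvation point is worth pinning down: in the two-array design, a thread that has used its slice cannot run again until every other runnable thread has used its slice too, because the arrays are only swapped once the active array is empty. Here is a minimal sketch (my own simplification; the real dynamic, per-priority slice lengths are omitted):&lt;br /&gt;

```python
from collections import deque

def o1_schedule(tasks, rounds):
    """Toy model of the Linux O(1) scheduler&#039;s two-array design.

    `tasks` is a list of (name, priority) pairs; a higher number means
    higher priority in this sketch. A task runs for one time slice,
    then moves to the expired array. When the active array is empty
    the two arrays are swapped, which both keeps every operation O(1)
    and prevents starvation of low-priority tasks.
    """
    levels = sorted({p for _, p in tasks}, reverse=True)
    # one FIFO run queue per priority level, as in the real scheduler
    active = {p: deque(n for n, q in tasks if q == p) for p in levels}
    expired = {p: deque() for p in levels}
    ran = []
    for _ in range(rounds):
        if all(len(q) == 0 for q in active.values()):
            active, expired = expired, active   # O(1) array swap
        for p in levels:                        # highest priority first
            if active[p]:
                name = active[p].popleft()
                ran.append(name)
                expired[p].append(name)         # slice exhausted
                break
    return ran
```

Two tasks at priorities 2 and 1 interleave as hi, lo, hi, lo: the low-priority task waits within each epoch but is never starved outright.&lt;br /&gt;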
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website too and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article; please tell me what you think of the following. Note that I have included the resource used as a footnote, whose placement I indicate with the number 1, and I have tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A key component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has grown in complexity, for example with multi-core CPUs, operating system schedulers have evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2305</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2305"/>
		<updated>2010-10-04T13:38:30Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know the details of how it ran, the scheduling algorithm operated in O(n) time, so the scheduler became slower as more tasks were added. In addition, a single data structure was used to manage all processors of a system, which created problems with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing its support for symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree, which implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, although it was disabled by default in the early versions; this changed in later releases. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessor and multiprocessor systems. It also has a constant execution time, regardless of the number of threads.&lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying, Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler is used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads are assigned a scheduling priority that determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues from highest priority to lowest and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found, the system spends an equal time slice on each thread in that run queue. This time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae for determining thread priority, which is beyond the scope of this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- an overhaul of the original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives a great overview of the evolution of several scheduler versions: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The January 2002 version included the O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses two priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order from highest priority to lowest. Threads that exhaust their time slice are moved to the expired queue, while threads with remaining time are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.&lt;br /&gt;
&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would expect it to lead to starvation if one or more threads had a high enough priority.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website too and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;/div&gt;</summary>
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2294</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2294"/>
		<updated>2010-10-03T16:33:12Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know the details of how it ran, the scheduling algorithm operated in O(n) time, so the scheduler became slower as more tasks were added. In addition, a single data structure was used to manage all processors of a system, which created problems with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing its support for symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
- Mike&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree, which implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, although it was disabled by default in the early versions; this changed in later releases. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessor and multiprocessor systems. It also has a constant execution time, regardless of the number of threads.&lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2293</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2293"/>
		<updated>2010-10-03T16:27:18Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, early versions of the Linux scheduler had a very hard time managing large numbers of tasks at the same time. Although I do not know the details of how it ran, the scheduling algorithm operated in O(n) time, so the scheduler became slower as more tasks were added. In addition, a single data structure was used to manage all processors of a system, which created problems with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing its support for symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
- Mike&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). The schedulers before CFS were based on a multilevel feedback queue algorithm; this changed in 2.6.23. Unlike most schedulers, CFS is not based on a run queue but on a red-black tree, which implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced, although it was disabled by default in the early versions; this changed in later releases. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessor and multiprocessor systems. It also has a constant execution time, regardless of the number of threads.&lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
- Sebastian&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2291</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2291"/>
		<updated>2010-10-02T18:37:58Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and the current version of Linux uses the Completely Fair Scheduler.&lt;br /&gt;
&lt;br /&gt;
-Some text about FreeBSD scheduling: http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&lt;br /&gt;
-Sebastian&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2290</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2290"/>
		<updated>2010-10-02T17:42:27Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: Created page with &amp;#039;=Discussion=  This is the discussion page for Question 5.&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
This is the discussion page for Question 5.&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2289</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2289"/>
		<updated>2010-10-02T17:37:32Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question.&lt;br /&gt;
&lt;br /&gt;
-Some text about FreeBSD scheduling: http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&lt;br /&gt;
-Sebastian&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_1_2010&amp;diff=2225</id>
		<title>COMP 3000 Lab 1 2010</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_1_2010&amp;diff=2225"/>
		<updated>2010-09-20T18:15:32Z</updated>

		<summary type="html">&lt;p&gt;Sschnei1: /* Part B (Optional) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Part A (Mandatory)==&lt;br /&gt;
&lt;br /&gt;
This part is to be completed in class.&lt;br /&gt;
&lt;br /&gt;
You may add or edit tips after each question; please do not edit the original question, however.&lt;br /&gt;
&lt;br /&gt;
# Create a virtual machine in VirtualBox for Ubuntu Linux and install Ubuntu using the ISO image in C:\support\somayaji.&lt;br /&gt;
#*Create your disk image on your Desktop or on a USB stick.&lt;br /&gt;
#*Images on your desktop will be deleted when you log out; USB stick images will be slower to access.&lt;br /&gt;
#*USB sticks may need to be formatted with NTFS to support a VirtualBox image. &lt;br /&gt;
#*Create a fixed-sized disk to increase performance.  However, you&#039;ll then have to wait for the fixed-sized disk to be allocated.&lt;br /&gt;
#*If you wish, you can use the Ubuntu install ISO as a live CD image.  In this case you don&#039;t need to install to any virtual hard disks.&lt;br /&gt;
#Create an account on the class wiki.  If you choose not to use your connect username please email me with your real name and your wiki username.&lt;br /&gt;
# Install the guest additions to your Ubuntu guest.  What new capabilities do you get?&lt;br /&gt;
#*To install, first mount the guest additions ISO using the &amp;quot;Devices&amp;quot; VirtualBox menu option.&lt;br /&gt;
#*To run the additions script, start up a Terminal window and run &amp;quot;sudo sh /media/&amp;lt;VBOXADD...&amp;gt;/VBoxLinuxAdditions-x86.run&amp;quot;.&lt;br /&gt;
# How much RAM does the VM use on the host?  How much is available in the VM?&lt;br /&gt;
# Look at the Disk Utility application in Ubuntu.  What sort of storage hardware does it have?  How does this compare to the hardware on the Windows host?&lt;br /&gt;
# Look at /proc/cpuinfo in the Ubuntu guest - is the CPU the same as that reported by Windows?&lt;br /&gt;
# What about the PCI devices as reported by the command line program &amp;quot;lspci&amp;quot;?&lt;br /&gt;
# How does the performance of the VM compare to that of the host OS?  Examine GUI, disk, and network performance.&lt;br /&gt;
&lt;br /&gt;
Feel free to add your tips for the above exercises here.&lt;br /&gt;
&lt;br /&gt;
==Part B (Optional)==&lt;br /&gt;
&lt;br /&gt;
The following exercises are optional.&lt;br /&gt;
&lt;br /&gt;
#Run benchmarks in the guest and host OSs such as lmbench for Linux.&lt;br /&gt;
#*Tip: phoronix-test-suite, rambench, cpuburn, bashmark, forkbomb&lt;br /&gt;
#Enable support for flash and non-free codecs in Ubuntu.&lt;br /&gt;
#Create an Ubuntu virtual machine in VMWare Player.  How does the performance of VMWare and VirtualBox compare?&lt;br /&gt;
#Can you run VirtualBox in the Ubuntu guest?  Note that VirtualBox is part of the Ubuntu distribution already.&lt;br /&gt;
# Setup shared folders between the guest and host and verify that you can copy files both ways.  What does the shared folder look like to Ubuntu?&lt;/div&gt;</summary>
		<author><name>Sschnei1</name></author>
	</entry>
</feed>