<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AbsMechanik</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AbsMechanik"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/AbsMechanik"/>
	<updated>2026-05-01T21:00:15Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_11&amp;diff=6886</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_11&amp;diff=6886"/>
		<updated>2010-12-03T07:29:04Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;--[[User:AbsMechanik|AbsMechanik]] 04:50, 3 December 2010 (UTC) I guess I owe it to the lack of sleep, but I finally came across a technical paper which raises a few interesting points for the critique section. Here&#039;s the link: http://queue.acm.org/detail.cfm?id=1773943. Best part: it contains some really good info which is summarized in the paper itself, as it&#039;s by the same group of people.&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 22:21, 2 December 2010 (UTC) As I mentioned, the info came from, mostly, &#039;Timekeeping in Virtual Machines&#039; (2nd point in the References section). Also, technically speaking, direct citations are only ever required when taking quotes from something or using specific numbers, and since I didn&#039;t use either, I didn&#039;t see the point in citing. Which seems lazy, but I&#039;ve never lost any marks for a lack of citing on essays before, so I figure it&#039;s probably not a big deal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The guest time-keeping section is really good but requires citations. Does someone know where exactly the info came from? - Fedor&lt;br /&gt;
&lt;br /&gt;
Hi, I&#039;m making some cosmetic changes to style, grammar &amp;amp; citation format. - Fedor&lt;br /&gt;
&lt;br /&gt;
--[[User:Spanke|Spanke]] 00:19, 2 December 2010 (UTC) Finished Timers, I hate 3004...&lt;br /&gt;
&lt;br /&gt;
--[[User:Jjpwilso|Jjpwilso]] 00:03, 2 December 2010 (UTC) I&#039;ll check for some more references on the ACM and IEEE databases. In the meantime I thought I&#039;d mention what Anil said regarding critique. He suggested we should consider other approaches to the same solution, such as modifying NTP with a different heuristic. I&#039;ll see what I can dig up in other papers on NTP.&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 22:06, 1 December 2010 (UTC) I&#039;m assuming you meant for me to add my references, yes? I really only used the article, and &#039;Timekeeping in Virtual Machines&#039; which I went to add, but is already on there. I&#039;ve looked for other articles to see how others besides VMware have looked at it, but there really isn&#039;t a huge amount out there dealing &#039;&#039;specifically&#039;&#039; with guest timekeeping (unless I&#039;ve gone Google-blind, which has admittedly happened before). Mostly I ran into links pointing to that specific article.&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 17:50, 1 December 2010 (UTC) I added stuff to the Research problems section. I think I summarized most of them. If I forgot any, please add them in. I also added the missing references in the reference section. For Fedor: we seem to be missing content in two sections. Also, you could read through the other sections and add/change any pertinent information that might&#039;ve been missed, to make this essay even better.&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 15:11, 1 December 2010 (UTC) Would it be possible to add your references at the bottom please? Even if it is a link. I have added the article link at the top of the essay.&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry it&#039;s taken me a while to post here. If there is a particular topic that needs researching, I could spend some hours doing that tomorrow - suggestions? Also, I intend to fix up the style &amp;amp; structure after everything is done, as I am quite good with that. &lt;br /&gt;
&lt;br /&gt;
Fedor&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 21:36, 26 November 2010 (UTC) So I was a little (more than a little) behind on my initially estimated time for getting stuff up on Guest Timekeeping, but that&#039;s the gist of it there now. I&#039;m going to try to buff it up a bit before it&#039;s due, since what I put in is a bit rougher than I&#039;d like. If I seem to be missing something that should be pretty obvious, let me know and I&#039;ll work it in.&lt;br /&gt;
&lt;br /&gt;
--[[User:Jjpwilso|Jjpwilso]] 15:49, 23 November 2010 (UTC) I&#039;ve been completely swamped with COMP3004 stuff (among other things) and feeling guilty as hell about this essay. The good news, for those who might have missed today&#039;s lecture, is we have an extension of one week. Phew!!&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 21:29, 22 November 2010 (UTC) I have added a small part to the background section. I have created a diagram by hand explaining how it works. I tried to find an original way of doing it, but it is the same diagram everywhere. Please feel free to comment here or by sending me an email.&lt;br /&gt;
&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 19:46, 22 November 2010 (UTC) Here&#039;s what my research has led me to so far. I&#039;m trying to come up with good points for the research problem, contribution and critique parts of this essay. Here&#039;s a bunch of links I&#039;ve come across; I think there will be a few more tonight. Feel free to read through &#039;em: &lt;br /&gt;
&amp;lt;br&amp;gt;http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.xen.org/files/xen_interface.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.microsoft.com/whdc/system/sysinternals/mm-timer.mspx&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.intel.com/hardwaredesign/hpetspec_1.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.cubinlab.ee.unimelb.edu.au/radclock/&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 18:55, 22 November 2010 (UTC) I&#039;m good taking the Guest Timekeeping section. Hopefully I&#039;ll have some stuff up tonight or early tomorrow for it.&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 17:14, 22 November 2010 (UTC) I will be working on the Background section. I will dedicate it to explaining some of the key concepts used in the research paper, which will allow readers to better understand the rest of our essay. The structure you&#039;ve put in place looks good, but it might get modified depending on how the text flows. The diagram is a good idea. I will draw a simple one and add it in. Feel free again to critique.&lt;br /&gt;
&lt;br /&gt;
--[[User:Jjpwilso|Jjpwilso]] 15:12, 16 November 2010 (UTC) I wanted to get a structure started, so I have stubbed out the first section. Note: some of the sub-sections might belong in the Research Problem section but we can easily move them if they fit there. Let&#039;s use this area to plan who is doing what. Feel free to critique any of my submissions. When you comment here, please put your comments at the very top so we can easily see recent posts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Participants=&lt;br /&gt;
(X) Blais   Sylvain sblais2 - Email: syl20blais@gmail.com&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Graham  Scott   sgraham6&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Ilitchev Fedor  filitche fedor dot ilitchev at gmail dot com &amp;lt;br&amp;gt; &lt;br /&gt;
(X) Panke   Shane   spanke&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Shukla  Abhinav ashukla2&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Wilson  Robert  jjpwilso&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_11&amp;diff=6885</id>
		<title>COMP 3000 Essay 2 2010 Question 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_11&amp;diff=6885"/>
		<updated>2010-12-03T07:27:12Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Virtualize Everything But Time =&lt;br /&gt;
Article written by Timothy Broomhead, Laurence Cremean, Julien Ridoux and Darryl Veitch of the Centre for Ultra-Broadband Information Networks (CUBIN), Department of Electrical &amp;amp; Electronic Engineering, University of Melbourne, Australia. Here is the link to the article: [http://www.usenix.org/events/osdi10/tech/full_papers/Broomhead.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
The next time you notice one stranger ask another for the time and you see them check their watch, try this experiment: immediately ask too. Chances are the person will check their watch again. Why? Human internal clocks are notoriously unreliable. Our sense of time contracts and expands all day long. We seem to believe that a definitive report of time can only come from some mechanical or electronic source. So social norms require that the watch owner provide you with two things: 1) the time, and 2) a gesture of external authority, i.e. a glance at their watch.&lt;br /&gt;
&lt;br /&gt;
The story of time inside a virtual machine is almost as unreliable as our own internal clocks. How much time has elapsed since a VM client got the CPU&#039;s attention? At the best of times there&#039;s no way for it to know, because it wasn&#039;t actually running the whole time. If the VM was suspended and migrated from one physical host to another, its concept of time is even worse. This paper is about how a computer glances at its metaphorical watch, and what kinds of timepieces it has at hand.&lt;br /&gt;
&lt;br /&gt;
To better understand this paper, it helps to first understand the general concepts behind it. For example, we all know what clocks are in our day-to-day lives, but how are they different in the context of computing? In this section, we will describe concepts like timekeeping and hardware/software clocks, explore the advantages and disadvantages of the different available counters and synchronization algorithms, and explain what a para-virtualized system is.&lt;br /&gt;
&lt;br /&gt;
===Timekeeping===&lt;br /&gt;
&lt;br /&gt;
Computers typically measure time in one of two ways: tick counting and tickless timekeeping [2]. In tick counting, the operating system sets up a hardware timer to interrupt the CPU at a fixed rate. A counter is updated each time one of these interrupts occurs, and it is this counter that allows the system to keep track of the passage of time. &lt;br /&gt;
&lt;br /&gt;
In tickless timekeeping, the OS does not track time through interrupts; instead, it relies on a hardware device with its own counter, which starts counting when the system boots. The OS simply reads that counter when needed. Tickless timekeeping seems to be the better way to keep track of time because it doesn’t hog the CPU with hardware interrupts; however, its performance depends heavily on the type of hardware used. Another disadvantage is that such counters tend to drift (see below). Neither of these methods knows the actual time: they only know how long it has been since they last checked an authoritative source. Personal computers typically get their time from a battery-backed real-time clock (i.e. the CMOS clock). Networked machines often need a more precise time, with a resolution in the millisecond range or below. In these cases a machine can query another source, such as one based on the Network Time Protocol (NTP).&lt;br /&gt;
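&lt;br /&gt;
To make the two styles concrete, here is a minimal C sketch of ours (not from the paper or its references): the tick-counting side is just a counter that a timer interrupt handler would increment, while the tickless side asks the OS to read a free-running hardware counter (CLOCK_MONOTONIC_RAW is Linux-specific):&lt;br /&gt;
 #define _DEFAULT_SOURCE               /* exposes CLOCK_MONOTONIC_RAW on Linux */&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;time.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Tick counting: a timer interrupt handler bumps this counter. */&lt;br /&gt;
 static volatile unsigned long ticks;          /* e.g. one tick per millisecond */&lt;br /&gt;
 &lt;br /&gt;
 /* Tickless: read a free-running hardware counter via the OS. */&lt;br /&gt;
 static long long tickless_ns(void)&lt;br /&gt;
 {&lt;br /&gt;
     struct timespec ts;&lt;br /&gt;
     clock_gettime(CLOCK_MONOTONIC_RAW, &amp;amp;ts);  /* raw counter, never slewed */&lt;br /&gt;
     return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     printf(&amp;quot;ticks=%lu  counter=%lld ns\n&amp;quot;, ticks, tickless_ns());&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;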
&lt;br /&gt;
===Clocks===&lt;br /&gt;
&lt;br /&gt;
Computer “clocks” or “timers” can be hardware based, software based, or even a hybrid. The most commonly found timer is the hardware timer. Most hardware timers can be generally described by the following diagram, though individual timers may have more or fewer features:&lt;br /&gt;
&lt;br /&gt;
Diagram 1. Timer Abstraction&lt;br /&gt;
&lt;br /&gt;
[[File:Timerabstract.jpg]]&lt;br /&gt;
&lt;br /&gt;
This diagram nicely represents how tick counting works. The oscillator runs at a predetermined frequency, which the operating system might have to measure when the system boots. The counter starts with a predetermined value which can be set by software. On every cycle of the oscillator, the counter counts down one unit. When it reaches zero, it generates an output signal that may interrupt the CPU. That same interrupt then causes the counter’s initial value to be reloaded into the counter, and the process begins again. Not all hardware timers work exactly like that: some actually count up, others don&#039;t use interrupts, and yet others don&#039;t keep an initial value. The general principle of hardware counters is, however, the same: there is some kind of fixed interval at the end of which the current time is advanced by an appropriate number of units (e.g. nanoseconds).&lt;br /&gt;
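&lt;br /&gt;
As a toy illustration of Diagram 1 (our sketch, not taken from any source), the count-down-and-reload loop can be written out in a few lines of C, here with PIT-like numbers:&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Toy model of Diagram 1: every oscillator cycle decrements the counter;&lt;br /&gt;
    at zero we &amp;quot;interrupt&amp;quot;, advance the clock and reload the counter. */&lt;br /&gt;
 struct timer { unsigned reload, counter; unsigned long time_ns; };&lt;br /&gt;
 &lt;br /&gt;
 static void oscillator_cycle(struct timer *t, unsigned long ns_per_irq)&lt;br /&gt;
 {&lt;br /&gt;
     if (--t-&amp;gt;counter == 0) {         /* output signal: interrupt the CPU */&lt;br /&gt;
         t-&amp;gt;time_ns += ns_per_irq;    /* advance time by one fixed interval */&lt;br /&gt;
         t-&amp;gt;counter = t-&amp;gt;reload;      /* reload the initial value */&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     struct timer t = { 1193, 1193, 0 };   /* roughly 1 kHz from a 1.19 MHz PIT */&lt;br /&gt;
     for (unsigned long i = 0; i &amp;lt; 1193UL * 5; i++)&lt;br /&gt;
         oscillator_cycle(&amp;amp;t, 1000000);   /* each interrupt adds 1 ms */&lt;br /&gt;
     printf(&amp;quot;simulated time: %lu ns\n&amp;quot;, t.time_ns);   /* prints 5000000 */&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;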
&lt;br /&gt;
===Timers===&lt;br /&gt;
# The PIT is useful for generating interrupts at regular intervals through its three channels. Channel 0 is bound to IRQ0, which interrupts the CPU at regular intervals; Channel 1 is system-specific and Channel 2 drives the PC speaker. As such, we only need to concern ourselves with Channel 0. [3]&lt;br /&gt;
# The CMOS RTC is a battery-backed real-time clock: the CMOS battery keeps the chip powered so it can keep track of things like time even while the physical PC has no source of power. Without a working CMOS battery on the motherboard, the computer resets to its default time on each restart. The battery itself can die, as expected, if the computer is powered off and left unused for a long period of time. This can cause issues for the main OS as well as the VM. [4]&lt;br /&gt;
# The Local APIC handles external interrupts for its processor. It can also accept and generate inter-processor interrupts between Local APICs. [5]&lt;br /&gt;
# ACPI establishes industry-standard interfaces for OS-directed device configuration and power management. It is an industry standard through its creators: Intel, Microsoft, Phoenix, Hewlett-Packard and Toshiba. Its power management covers all form factors: notebooks, desktops and servers. ACPI&#039;s goal is to improve existing power and configuration standards for hardware devices by transitioning to ACPI-compliant hardware. This gives the OS, as well as the VM, control over power management. [9]&lt;br /&gt;
# RDTSC is an x86 instruction introduced with the P5 (Pentium) family that provides high-resolution timing; however, it suffers from several flaws (see the sketch after this list). Readings can be discontinuous when a thread does not run on the same processor each time, as happens on a multicore processor where each core has its own cycle counter; this is made worse by ACPI power management, which can leave the cores&#039; counters completely out of sync. There is also the question of dedicated hardware: &amp;quot;RDTSC locks the timing information that the application requests to the processor&#039;s cycle counter.&amp;quot; With dedicated timing devices included on modern motherboards, this method of locking the timing information will become obsolete. Lastly, the variability of the CPU&#039;s frequency needs to be taken into account: on modern laptops, CPU frequencies are adjusted on the fly, rising to meet the user&#039;s demand and dropping when idle. This results in longer battery life and less heat, but regrettably makes RDTSC unreliable. [10]&lt;br /&gt;
# HPET defines a set of timers that the OS has access to and can assign to applications. Each timer can generate an interrupt when the least significant bits of its comparator equal the equivalent bits of the 64-bit counter value. However, a race condition can occur in which the target time has already passed; this causes more interrupts and more work even if the task is a simple one. It does produce fewer interrupts than its predecessors, the PIT and CMOS RTC, giving it an edge. Despite its race condition, this modern timer is an improvement on older practice. [11]&lt;br /&gt;
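&lt;br /&gt;
For the RDTSC caveats above, here is a minimal sketch of how the instruction is typically used (gcc/clang intrinsic, x86 only; our illustration, not code from [10]):&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;x86intrin.h&amp;gt;    /* __rdtsc() intrinsic on gcc/clang, x86 only */&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     unsigned long long start = __rdtsc();&lt;br /&gt;
     /* ... work being timed ... */&lt;br /&gt;
     unsigned long long end = __rdtsc();&lt;br /&gt;
     /* Caveats from the list above: the two reads may land on different&lt;br /&gt;
        cores, and frequency scaling changes how long a cycle really is. */&lt;br /&gt;
     printf(&amp;quot;elapsed: %llu cycles\n&amp;quot;, end - start);&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;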
&lt;br /&gt;
==Guest Timekeeping==&lt;br /&gt;
&lt;br /&gt;
Guest timekeeping uses the same general methods as any computer timekeeping: either tick counting or a tickless system. Where the two begin to differ, however, is that a host operating system is able to communicate directly with the physical hardware, while the guest operating system is not: it must ask the host to communicate with the hardware on its behalf. This indirection is the greatest source of the guest operating system&#039;s clock losing accuracy, otherwise known as drift.&lt;br /&gt;
&lt;br /&gt;
===Sources of Drift===&lt;br /&gt;
&lt;br /&gt;
When a guest operating system is started, its clock simply synchronizes with the host&#039;s; some hypervisors, such as VMware, also do this when a guest is resumed from a suspended state or restored from a snapshot. It is easy to reason that, because the guest&#039;s clock starts off correct, it will remain correct from then on. Unfortunately, this is not the case. The first source of drift is simply electronics: a clock is almost never entirely accurate, having a slight error due to the effects of ambient temperature on oscillator frequency, even on the host system. Since the guest relies on the host to keep track of its time, an error in the host&#039;s time is not only passed on to the guest; because the host is busy correcting its own time, the guest&#039;s request for a count is given slightly less priority, making it lose accuracy yet again. The larger the drift in the host, the larger the drift in the guest, as the host&#039;s drift simply compounds the issue.&lt;br /&gt;
&lt;br /&gt;
Aside from the host&#039;s own drift, the other cause of drift in a virtual environment is that the guest is treated like a process by the host. In itself this does not seem like a problem, but the guest system can be suspended just before it tries to update its perception of time. With restricted CPU time, it is easy for lost ticks to pile up into a backlog. Even if the guest checks with a time server over the network, its network conversation can be suspended before the answer comes back. When the process resumes, the guest has a wildly inaccurate perception of the elapsed round-trip time (RTT), and its incorrect adjustment for the network delay will throw off the clock. Problems can also come from memory swapping performed by the host. If the virtual environment does not have enough memory allocated to it by the host, it can run into the problem of swapping out pages that are needed soon. Swapping the pages back in momentarily brings the entire virtual environment to a halt, so ticks are missed and the clock falls behind. &lt;br /&gt;
&lt;br /&gt;
Clearly the errors in virtual machine timekeeping come from algorithms that were simply not designed for virtual environments. They assume a more stable physical world than they have.&lt;br /&gt;
&lt;br /&gt;
===Impact of Drift===&lt;br /&gt;
&lt;br /&gt;
The sources of drift essentially boil down to round-off errors and lost ticks. But the practical impact of drift is quite apparent in any automated system. For a real-world analogy consider a factory&#039;s assembly line, where the machinery is finely tuned to do its own specific part at certain intervals, and generally does this with impressive efficiency. If the clock in the system were to drift, however, a specific machine may move too soon or too late, bringing the line to a potentially catastrophic halt.&lt;br /&gt;
&lt;br /&gt;
In a virtual environment, drift is a bit more subtle, as one result of it could be skewed process scheduling – some schedulers give a certain amount of time to a process before moving on, but if the guest&#039;s time has drifted substantially, when it tries to correct its time it could give more or less time to the processes in the scheduler.&lt;br /&gt;
&lt;br /&gt;
The impacts of drift are even more apparent when virtual machines must communicate with one another. Consider the case of a farm of distributed gaming servers, where differences in timing can mean virtual characters suddenly warp at super-human speed across the landscape. Or for a more serious example consider investment trading, where missing the moment to bid can mean a difference of millions of dollars.&lt;br /&gt;
&lt;br /&gt;
===Compensation Strategies===&lt;br /&gt;
&lt;br /&gt;
There are a number of compensation strategies for dealing with drift, depending on its cause. If the problem is due to CPU management, the host can give more CPU time to the virtual machine, lower the timer interrupt rate, or simply use a tickless counter. If it is due to memory management, allocating more memory to the virtual environment should prevent the system from needing to swap pages so often.&lt;br /&gt;
&lt;br /&gt;
If the issue is from neither of those, but simply the inevitable lag when the guest communicates with the hardware via the host, there are other methods to correct the drift. Most systems natively have algorithms built in to correct the time if it gets too far ahead of or behind real time, though these are not without their own faults: if the time is set ahead while catching up, the backlog of ticks may not be cleared, so the clock could set itself ahead multiple times until the backlog is dealt with. Tools built into the virtual machine itself, such as VMware Tools, can also deal with drift to an extent. This kind of tool checks whether the clock&#039;s error is within a certain margin; if it exceeds the margin, the backlog is set to zero (to prevent the issue mentioned with the native algorithms) and the guest resynchronizes with the host clock before going back to keeping time as it normally would.&lt;br /&gt;
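&lt;br /&gt;
The catch-up policy described above might look like the following sketch (hypothetical C, not actual VMware Tools code; the threshold and slew rate are invented for illustration):&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 #define STEP_THRESHOLD_MS 1000        /* invented margin for illustration */&lt;br /&gt;
 &lt;br /&gt;
 static long guest_ms, host_ms, backlog_ticks;&lt;br /&gt;
 &lt;br /&gt;
 static void check_clock(void)&lt;br /&gt;
 {&lt;br /&gt;
     long error = host_ms - guest_ms;&lt;br /&gt;
     if (labs(error) &amp;gt; STEP_THRESHOLD_MS) {&lt;br /&gt;
         backlog_ticks = 0;            /* drop missed ticks: no double catch-up */&lt;br /&gt;
         guest_ms = host_ms;           /* step: resynchronize with the host */&lt;br /&gt;
     } else {&lt;br /&gt;
         guest_ms += error / 10;       /* slew: correct a little at a time */&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     guest_ms = 0; host_ms = 5000; backlog_ticks = 42;&lt;br /&gt;
     check_clock();&lt;br /&gt;
     printf(&amp;quot;guest=%ld ms, backlog=%ld\n&amp;quot;, guest_ms, backlog_ticks);&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;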
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Today, the Network Time Protocol and daemons like ntpd are the dominant solution for accurate timekeeping. Under optimal conditions ntpd can be very good, but such conditions rarely occur. Network congestion, disconnections, lower-quality networking hardware and unexpected system events can create offset errors on the order of tens or even hundreds of milliseconds (ms). [6]&lt;br /&gt;
For demanding applications, this is neither robust nor reliable. One way to enhance the performance of ntpd would be to poll the NTP server more often, as this would reduce the offset error. Unfortunately, this would increase network traffic, which could cause congestion, which would in turn raise the offset error again. Thus, it would not work.&lt;br /&gt;
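&lt;br /&gt;
The arithmetic behind this is the standard NTP on-wire calculation; the small self-contained C sketch below (ours, not from the paper) shows why any asymmetry between the two network directions ends up hidden inside the offset estimate:&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Standard NTP sample arithmetic. t0 = client send, t1 = server receive,&lt;br /&gt;
    t2 = server send, t3 = client receive, all in seconds. */&lt;br /&gt;
 static void ntp_sample(double t0, double t1, double t2, double t3)&lt;br /&gt;
 {&lt;br /&gt;
     double offset = ((t1 - t0) + (t2 - t3)) / 2.0;  /* assumes symmetric paths */&lt;br /&gt;
     double delay  = (t3 - t0) - (t2 - t1);          /* round trip on the wire */&lt;br /&gt;
     printf(&amp;quot;offset=%.6f s  delay=%.6f s\n&amp;quot;, offset, delay);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     /* A congested return path inflates t3 and so corrupts the offset. */&lt;br /&gt;
     ntp_sample(10.000, 10.050, 10.051, 10.120);&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;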
&lt;br /&gt;
Another problem with current system software clocks using NTP (like ntpd) is that they provide only an absolute clock. [7] Such clocks are unsuitable for applications that deal with network management and measurement. The reason is that NTP focuses on the offset and not on the hardware clock&#039;s oscillator rate. For example, when calculating delay variations, the offset error does not change anything in the calculation, but variation in the clock&#039;s oscillator rate does affect it. Having a more accurate timestamp would make those calculations more precise, which means we would need another kind of system software clock.&lt;br /&gt;
&lt;br /&gt;
In virtualization (in this case Xen), migrating a running guest from one physical system to another can cause issues, and these are again caused by the ntpd daemon. By default, each guest OS runs its own instance of ntpd, whose synchronization algorithm keeps track of the reference wallclock time, the rate of drift and the current clock error, all of which are defined by the hardware clock of the system it runs on. When the virtualized OS is migrated to another system, the ntpd state is saved; but when the daemon resumes on the new system, problems occur: because no two hardware clocks drift the same way or hold exactly the same wallclock time, all the information tracked by the daemon is suddenly inaccurate. The consequences could prove disastrous for the system, ranging from slowly recoverable errors to cases where ntpd might never recover and the virtualized OS could become unstable.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
&lt;br /&gt;
The authors of this timekeeping paper have done previous work exploring feed-forward and feedback mechanisms for clock adjustment in non-virtual systems [7][8]. The RADclock (Robust Absolute and Difference clock) algorithm was originally developed to address the drift resulting from NTP&#039;s feedback algorithm, and the serious effects that non-ideal network conditions (a quite common circumstance) can have on it. In their original paper they improved network synchronization by using the TimeStamp Counter (TSC), a register introduced in Pentium-class machines, as a source for a CPU cycle count. The use of a more reliable timestamp and counter provided &amp;quot;GPS-like&amp;quot; reliability in networked environments.&lt;br /&gt;
&lt;br /&gt;
This new paper seeks to take a similar approach in a virtual machine setting where VM migration can cause much more severe disruption than simply lost UDP packets. Rather than use TSC calls (&#039;&#039;rdtsc()&#039;&#039; in [8]) they tried several clock sources, seeking to eliminate variability from power management and CPU load when setting &#039;&#039;raw&#039;&#039; timestamps for use in guest machines.&lt;br /&gt;
&lt;br /&gt;
The paper makes several references to &#039;&#039;feed-forward&#039;&#039; and &#039;&#039;feedback&#039;&#039; mechanisms, so a quick discussion of these control-theory terms is in order. In feed-forward mechanisms, inputs to a process or calculation may be modified in advance, but the resulting output plays no part in subsequent calculations. In feedback mechanisms (such as ntpd), inputs to a calculation can be modified by outputs from previous ones. As a result, feedback mechanisms carry state from one step to the next. Since the state of a virtual environment may be rendered inaccurate by so many sources, a feedback mechanism is a bad idea there. The statelessness of feed-forward mechanisms confers advantages in VM migrations, since a guest OS can simply discover the actual timestamps rather than try to estimate them from its own invalid state information. The RADclock implementation makes use of this feed-forward design.&lt;br /&gt;
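&lt;br /&gt;
To illustrate the distinction (our sketch; p and K stand for the rate and offset parameters that the synchronization algorithm estimates elsewhere), a feed-forward clock read is a pure function of the raw counter and the latest parameters, and the difference clock never touches K at all:&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Feed-forward reading: nothing here feeds back into the sync state. */&lt;br /&gt;
 struct ff_params { double p, K; };    /* rate and offset, estimated elsewhere */&lt;br /&gt;
 &lt;br /&gt;
 static double absolute_time(const struct ff_params *c, unsigned long long n)&lt;br /&gt;
 {&lt;br /&gt;
     return c-&amp;gt;K + c-&amp;gt;p * (double)n;      /* seconds since the epoch */&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 static double elapsed_time(const struct ff_params *c,&lt;br /&gt;
                            unsigned long long n1, unsigned long long n2)&lt;br /&gt;
 {&lt;br /&gt;
     return c-&amp;gt;p * (double)(n2 - n1);     /* K cancels: the difference clock */&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     struct ff_params c = { 1e-9, 1.29e9 };   /* 1 GHz counter, made-up offset */&lt;br /&gt;
     printf(&amp;quot;abs=%.6f s  diff=%.6f s\n&amp;quot;,&lt;br /&gt;
            absolute_time(&amp;amp;c, 5000000000ULL),&lt;br /&gt;
            elapsed_time(&amp;amp;c, 1000ULL, 2000000ULL));&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;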
&lt;br /&gt;
The mechanism the authors used in the Xen environment was the XenStore, a filesystem-like structure similar to &#039;&#039;sysfs&#039;&#039; or &#039;&#039;procfs&#039;&#039; that permits communication between virtual domains using shared memory. Dom0 (Xen&#039;s privileged management domain) takes synchronization from its own time server and saves the calculated clock parameters to the shared XenStore. On a DomU (Xen&#039;s term for a guest domain), an application simply uses the shared information as a raw timestamp or a time difference. The authors&#039; extensive testing showed a clear -- if unsurprising -- advantage over ntpd.&lt;br /&gt;
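&lt;br /&gt;
On the DomU side, the read could look roughly like this (a hypothetical sketch: xenstore-read is the standard XenStore command-line client, but the key path and parameter format here are invented):&lt;br /&gt;
 #define _DEFAULT_SOURCE               /* for popen() under strict compilers */&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Hypothetical DomU read of clock parameters published by Dom0 via the&lt;br /&gt;
    XenStore. The key path below is invented for illustration only. */&lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     FILE *f = popen(&amp;quot;xenstore-read /local/domain/0/data/clock_params&amp;quot;, &amp;quot;r&amp;quot;);&lt;br /&gt;
     double p, K;&lt;br /&gt;
     if (f &amp;amp;&amp;amp; fscanf(f, &amp;quot;%lf %lf&amp;quot;, &amp;amp;p, &amp;amp;K) == 2)&lt;br /&gt;
         printf(&amp;quot;rate p=%g  offset K=%g\n&amp;quot;, p, K);&lt;br /&gt;
     if (f)&lt;br /&gt;
         pclose(f);&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;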
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
The paper clearly addresses the two key problems with a feedback approach:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Stability of classical control approaches such as PLL and FLL cannot be guaranteed. &amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Difference clocks cannot even be defined in a feedback framework, so we lose their benefits, which include not only much higher accuracy, but also far higher robustness.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Interestingly enough, a feed-forward approach does not in itself guarantee that the clock never moves backwards (a causality-enforcing clock-read function can fix this without compromising the core design). The main drawback of implementing the new timing mechanism is system compatibility. In most Linux systems, the system clock maintained by the kernel is closely tied to the needs of NTP (ntpd); the kernel APIs support the feedback paradigm only. This implies that a feed-forward mechanism is simply not compatible with a stock system. Currently, RADclock gets around the lack of feed-forward support by providing patches that extend these mechanisms in FreeBSD and Linux in minimal ways, allowing raw counter access from both the kernel and user space. The RADclock API includes difference and absolute clock-reading functions based on direct counter timestamping, combined with the latest clock parameters and drift estimate.&lt;br /&gt;
&lt;br /&gt;
Another key point to remember is that perfect synchronization over a network is actually impossible; even the best timing counters and mechanisms are only as good as their handling of asymmetric jitter. It would also be interesting to see intelligent heuristics, based on the ideas presented in this paper, implemented within the existing NTP architecture. That could very well turn out to be a boon, avoiding the need to adopt new kernel standards to support feed-forward algorithms. &lt;br /&gt;
&lt;br /&gt;
However, it is difficult to critique the authors&#039; work, since they did a great job of finding a meaningful set of timestamps and counters and clearly demonstrated an advantage in their field of study. They also compared a number of time sources to ensure that their selection was meaningful and stable. And it&#039;s hard to argue with success: their residual errors approach the variation that comes from CPU temperature alone, which is quite impressive.&lt;br /&gt;
As a student, one criticism might be that they found quite an obscure way of describing what was at heart a very simple problem. Of course, to be fair to these academics, their paper wasn&#039;t written for students. But the problem can be summed up succinctly: if you can&#039;t trust your own perception of time, ask the closest agent you &#039;&#039;can&#039;&#039; trust -- and make sure they&#039;ve checked their watch. There is no closer source to a VM than its host, so find the fastest way there. Or, if one has enough money and time on one&#039;s hands, one can simply switch over to a GPS source or an atomic clock, both of which are better than the RADclock [12].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1. Broomhead, Cremean, Ridoux, Veitch. &amp;quot;Virtualize Everything But Time&amp;quot;. 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Broomhead.pdf&lt;br /&gt;
&lt;br /&gt;
2. &amp;quot;Timekeeping in Virtual Machines, Information Guide&amp;quot;. &#039;&#039;VMware&#039;&#039;. Web. http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
3. &amp;quot;Bran&#039;s Kernel Development Tutorial&amp;quot;. &#039;&#039;Bona Fide OS Developer&#039;&#039;. Web. http://www.osdever.net/bkerndev/Docs/pit.htm . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
4. &amp;quot;What is a CMOS battery, and why does my computer need one?&amp;quot;  &#039;&#039;Indiana University&#039;s Knowledge Base&#039;&#039;, 2010. Web. http://kb.iu.edu/data/adoy.html . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
5. &amp;quot;Multiprocessor Specification version 1.4&amp;quot;. &#039;&#039;Intel&#039;&#039;. May 1997. http://developer.intel.com/design/pentium/datashts/24201606.pdf .  &lt;br /&gt;
&lt;br /&gt;
6. Pasztor and Veitch. &amp;quot;PC Based Precision Timing Without GPS&amp;quot;.  2002. http://www.cubinlab.ee.unimelb.edu.au/~darryl/Publications/tscclock_final.pdf .&lt;br /&gt;
&lt;br /&gt;
7. Veitch, Ridoux, Korada. &amp;quot;Robust Synchronization of Absolute and Difference Clocks over Networks&amp;quot;. 2009. http://www.cubinlab.ee.unimelb.edu.au/~darryl/Publications/synch_ToN.pdf&lt;br /&gt;
&lt;br /&gt;
8. Broomhead, T.; Ridoux, J.; Veitch, D. &amp;quot;Counter Availability and Characteristics for Feed-forward Based Synchronization&amp;quot;. International Symposium on Precision Clock Synchronization for Measurement, Control and Communication (ISPCS 2009), pp. 1-6, 12-16 Oct. 2009.&lt;br /&gt;
&lt;br /&gt;
9. &amp;quot;Advanced Configuration and Power Interface Specification&amp;quot;. &#039;&#039;Intel Corporation&#039;&#039;. April 2010. http://www.acpi.info/DOWNLOADS/ACPIspec40a.pdf. &lt;br /&gt;
&lt;br /&gt;
10. &amp;quot;Game Timing and Multicore Processors&amp;quot;. &#039;&#039;MSDN&#039;&#039;. Dec. 2005. Web. http://msdn.microsoft.com/en-us/library/ee417693%28VS.85%29.aspx . Accessed: Nov. 2010.&lt;br /&gt;
&lt;br /&gt;
11. &amp;quot;IA-PC HPET (High Precision Event Timers) Specification&amp;quot;. &#039;&#039;Intel Corporation&#039;&#039;. Oct. 2004. http://hackipedia.org/Hardware/HPET,%20High%20Performance%20Event%20Timer/IA-PC%20HPET%20%28High%20Precision%20Event%20Timers%29%20Specification.pdf .&lt;br /&gt;
&lt;br /&gt;
12. Julien Ridoux, Darryl Veitch. &amp;quot;Principles of Robust Timing over the Internet&amp;quot;. April 21, 2010. ACM Queue. http://queue.acm.org/detail.cfm?id=1773943 . Accessed: Dec. 2010.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_11&amp;diff=6884</id>
		<title>COMP 3000 Essay 2 2010 Question 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_11&amp;diff=6884"/>
		<updated>2010-12-03T07:24:57Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Virtualize Everything But Time =&lt;br /&gt;
Article written by Timothy Broomhead, Laurence Cremean, Julien Ridoux and Darrel Veitch. They are working for the Center for Ultra-Broadband Information Networks (CUBIN) Department of Electrical &amp;amp; Electronic Engineering at the University of Melbourne in Australia. Here is the link to the article: [http://www.usenix.org/events/osdi10/tech/full_papers/Broomhead.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
The next time you notice one stranger ask another for the time and you see them check their watch, try this experiment: immediately ask too. Chances are the person will check their watch again. Why? Human internal clocks are notoriously unreliable. Our sense of time contracts and expands all day long. We seem to believe that a definitive report of time can only come from some mechanical or electronic source. So social norms require that the watch owner provides you with two things: 1) the time, and 2) a gesture of external authority, i.e. a glance at their watch.&lt;br /&gt;
&lt;br /&gt;
The story of time inside a virtual machine is almost as unreliable as our own internal clocks. How much time has elapsed since a VM client got the CPU&#039;s attention? At the best of times there&#039;s no way for it to guess because it wasn&#039;t actually running the whole time. If the VM was suspended and migrated from one physical host to another its concept of time is even worse. This paper is about how a computer glances at its metaphorical watch, and what kinds of timepieces it has at hand.&lt;br /&gt;
&lt;br /&gt;
To better understand this paper, it is very important to have a good understanding of the general concepts behind it. For example, we all know what clocks are in our day-to-day lives but how are they different in the context of computing? In this section, we will describe concepts like timekeeping, hardware/software clocks, explore the advantages and disadvantages of the different available counters and synchronization algorithms, and explain what a para-virtualized system is about.&lt;br /&gt;
&lt;br /&gt;
===Timekeeping===&lt;br /&gt;
&lt;br /&gt;
Computers typically measure time in one of two ways: tick counting and tickless timekeeping[2]. Tick counting is when the operating system sets up a hardware device, generally a CPU, to interrupt at a certain rate. A counter is updated each time one of these interrupts occurs. It is this counter that allows the system to keep track of the passage of time. &lt;br /&gt;
&lt;br /&gt;
In tickless timekeeping, instead of the OS keeping track of time through interrupts, a hardware device is used instead. This device has its own counter which starts when the system is booted. The OS simply reads the counter from it when needed. Tickless timekeeping seems to be the better way to keep track of time because it doesn’t hog the CPU with hardware interrupts, however its performance is very dependent on the type of hardware used. Another disadvantages is that they tend to drift (see below). But neither of these methods knows the actual time, they only know how long it has been since they last checked an authoritative source. Personal computers typically get their time from a battery-backed real-time clock (i.e. a CMOS clock). Networked machines often need a more precise time, with a resolution in the millisecond range or below. In these cases a machine can query another source such as one based on Network Time Protocol (NTP).&lt;br /&gt;
&lt;br /&gt;
===Clocks===&lt;br /&gt;
&lt;br /&gt;
Computer “clocks” or “timers” can be hardware based, software based or they can even be an hybrid. The most commonly found timer is the hardware timer. All of the hardware timers can be generally described by this diagram where some have either more or less features:&lt;br /&gt;
&lt;br /&gt;
Diagram1. Timer Abstraction&lt;br /&gt;
&lt;br /&gt;
[[File:Timerabstract.jpg]]&lt;br /&gt;
&lt;br /&gt;
This diagram nicely represents how tick counting works. The oscillator runs at a predetermined frequency. The operating system might have to measure it when the system boots. The counter starts with a predetermined value which can be set by software. For every cycle of the oscillator, the counter counts down one unit. When it reaches zero, its generates an output signal that might interrupt the CPU. That same interrupt will then allow the counter’s initial value to be reloaded into the counter and the process begins again. Not all hardware timers work exactly like that. For instance, some actually count up, others don&#039;t use interrupts, and yet others don&#039;t keep an initial counter. The general principle of hardware counters is the however the same. There is some kind of fixed interval at the end of which the current time is updated by an appropriate number of units (i.e. nanoseconds).&lt;br /&gt;
&lt;br /&gt;
===Timers===&lt;br /&gt;
# PIT is useful for generating interrupts at regular intervals through its three channels. Channel 0 is bound to IRQ0 which interrupts the CPU at regular intervals. Channel 1 is specific to each system and Channel 2 is connected to the speaker system. As such, we only need to concern ourselves with Channel 0. [3]&lt;br /&gt;
# CMOS RTC, also known as a CMOS battery, allows the CMOS chip to remain powered to keep track of things like time even while the physical PC unit has no source of power. If there is no CMOS battery on the motherboard, the computer would reset to its default time each restart. The battery itself can die, as expected, if the computer is powered off and not used for a long period of time. This can cause issues with the main OS as well as the VM. [4]&lt;br /&gt;
# Local APIC handles all external interrupts for the processor in the system. It can also accept and generate inter-processor interrupts between Local APICs. [5]&lt;br /&gt;
# ACPI establishes industry-standard interfaces configuration guided by the OS and power management. It is industry-standard through its creators, Intel, Microsoft, Phoenix, Hewlett Packard and Toshiba. Its power management includes all forms: notebooks, desktops, and servers. ACPI&#039;s goal is to improve current power and configuration standards for hardware devices by transitioning to ACPI-compliant hardware. This allows the OS as well as the VM to have control over power management. [9]&lt;br /&gt;
# RDTSC is based on the x86 P5 instruction set and perform high-resolution timing, however, it suffers from several flaws. Discontinuous values from the processor are caused as a result of not using the same thread to the processor each time, which can also be caused by having a multicore processor. This is made worse by ACPI which will eventually lead to the cores being completely out of sync. Availability of dedicated hardware: &amp;quot;RDTSC locks the timing information that the application requests to the processor&#039;s cycle counter.&amp;quot; With dedicated timing devices included on modern motherboards this method of locking the timing information will become obsolete. Lastly, the variability of the CPU&#039;s frequency needs to be taken into account. With modern day laptops, most CPU frequencies are adjusted on the fly to respond to the users demand when needed and to lower themselves when idle, this results in longer battery life and less heat generated by the laptop but regretfully affects RDTSC making it unreliable. [10]&lt;br /&gt;
# HPET defines a set of timers that the OS has access to and can assign to applications. Each timer can generate an interrupt when the least significant bits are equal to the equivalent bits of the 64-bit counter value. However, a race case can occur in which the target time has already passed. This causes more interrupts and more work even if the task is a simple one. It does produce less interrupts than its predecessors PIT and CMOS RTC giving it an edge. Despite its race condition, this modern timer is improvement upon old practices.  [11]&lt;br /&gt;
&lt;br /&gt;
==Guest Timekeeping==&lt;br /&gt;
&lt;br /&gt;
Guest timekeeping is done using the same general methods as any computer timekeeping: either tick counting or tickless systems. Where the two begin to differ, however, is that a host operating system is able to communicate directly with the physical hardware, while the guest operating system is unable to do so, having to communicate to the host system that it wants to communicate with the hardware. Having to do this is the greatest source of the guest operating system&#039;s clock losing accuracy, otherwise known as drifting.&lt;br /&gt;
&lt;br /&gt;
===Sources of Drift===&lt;br /&gt;
&lt;br /&gt;
When a guest operating system is started, its clock simply synchronizes with the host&#039;s – some virtual machines such as VMware also do this when they are resumed from a suspended state, or restored from a snapshot. It is easy to reason that, because the guest&#039;s clock starts off correctly, it will always be correct from then on. Unfortunately, this is not the case. The first source of drift is simply due to electronics. A clock is almost never entirely accurate, having a slight error due to the effects of ambient temperature on oscillator frequency, even on the host system. Since the guest communicates with the host in order to keep track of its time, an error in the host&#039;s time is not only passed on to the guest, but because the host is trying to correct its own time, the guest&#039;s request for a count is given slightly less priority, making it yet again lose accuracy. The larger the drift in the host, the larger the drift in the guest, as the host&#039;s drift simply compounds the issue.&lt;br /&gt;
&lt;br /&gt;
Aside from the host&#039;s own drift, the other cause of drift in the virtual environment is the fact that it is treated like a process by the host. In and of itself this does not seem like a problem, but the guest system can be suspended just before it tries to update its perception of time. With restricted CPU time, it is easy for the lost ticks to pile up and create a backlog. Even if the guest is checking over the network with a time server, its network conversation can be suspended before the answer comes back. When the process resumes the guest has a wildly inaccurate perception of the elapsed time (known as Round-Trip Time or RTT) and its incorrect adjustments for the network delay will throw off the clock. Problems can also come from memory swaps performed by the host. If the virtual environment does not have enough allocated to it by the host it can run into the problem of swapping out pages that are needed soon. Swapping the pages back in will momentarily bring the entire virtual environment to a halt, so ticks are missed and the clock falls behind. &lt;br /&gt;
&lt;br /&gt;
Clearly the errors in virtual machine timekeeping come from algorithms that were simply not designed for virtual environments. They assume a more stable physical world than they have.&lt;br /&gt;
&lt;br /&gt;
===Impact of Drift===&lt;br /&gt;
&lt;br /&gt;
The sources of drift essentially boil down to round-off errors and lost ticks. But the practical impact of drift is quite apparent in any automated system. For a real-world analogy consider a factory&#039;s assembly line, where the machinery is finely tuned to do its own specific part at certain intervals, and generally does this with impressive efficiency. If the clock in the system were to drift, however, a specific machine may move too soon or too late, bringing the line to a potentially catastrophic halt.&lt;br /&gt;
&lt;br /&gt;
In a virtual environment, drift is a bit more subtle, as one result of it could be skewed process scheduling – some schedulers give a certain amount of time to a process before moving on, but if the guest&#039;s time has drifted substantially, when it tries to correct its time it could give more or less time to the processes in the scheduler.&lt;br /&gt;
&lt;br /&gt;
The impacts of drift are even more apparent when virtual machines must communicate with one another. Consider the case of a farm of distributed gaming servers, where differences in timing can mean virtual characters suddenly warp at super-human speed across the landscape. Or for a more serious example consider investment trading, where missing the moment to bid can mean a difference of millions of dollars.&lt;br /&gt;
&lt;br /&gt;
===Compensation Strategies===&lt;br /&gt;
&lt;br /&gt;
There are a number of compensation strategies for dealing with drift, depending on the cause of it. If the problem is due to CPU management issues, then the host can give more CPU time to the virtual machine, or it can lower the timer interrupt rate – or simply use a tickless counter. If it is due to a  memory management issue, allocating more memory to the virtual environment should prevent the system from needing to swap out page files so often.&lt;br /&gt;
&lt;br /&gt;
If the issue is from neither of those, but simply due to the inevitable lag when the guest communicates with the hardware via the host, then there are other methods to correct the drift. Most systems natively have algorithms built in to correct the time if it gets too far ahead or behind real time, though they are not without their own faults; if the time is set ahead when catching up, the backlog of ticks it has built up may not be cleared, so it could potentially set itself ahead multiple times until the backlog is dealt with. Tools built into the virtual machine itself can also deal with drift to an extent, as VMware Tools does. This kind of tool checks to see if the clock&#039;s error is within a certain margin. If it exceeds the margin, then the backlog is set to zero – to prevent the issue mentioned with the native algorithms – and resynchronizes with the host clock before the guest goes back to keeping track of time as it normally would.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Today, the use of the Network Time Protocol and of daemons like ntpd is the dominant solution for accurate timekeeping. In optimal conditions, the ntpd can be very good but these situations rarely happen. Network congestion, disconnections, lower quality networking hardware and unsuspected system events can create offsets errors in the order of 10‘s or even 100 milliseconds(ms). [6]&lt;br /&gt;
For demanding applications, this is neither robust nor reliable. One way to enhance the performance of ntpd would be to poll from the NTP server more often as this would reduce the offset error. Unfortunately, this would increase the network traffic which could cause network congestions which would raise the offset error. Thus, it would not work.&lt;br /&gt;
&lt;br /&gt;
Another problem with current system software clocks using NTP(like ntpd), is that they provide only an absolute clock.[7] Such clocks are unsuitable for applications that deal with network management and measurements. The reason for this is that NTP focuses on the offset and not on hardware clock&#039;s oscillator rate. For example, when calculating delay variations, the offset error does not change anything in the calculations but the clocks’ oscillator rate variation does affect it. So having a more accurate timestamp would make those calculation more precise. Which mean we would need another system software clock.&lt;br /&gt;
&lt;br /&gt;
In virtualization(in this case Xen), when migrating a running system from one system to another can cause issues and this is again caused by the ntpd daemon. By default, each guest OS runs its own instance of the ntpd daemon. So the synchronization algorithm keeps track of the reference wallclock time, rate-of-drift and current clock error, which are defined by the hardware clock on the system. So when migrating the virtualized OS to another system, the ntpd state is saved but when it is enabled again on the new system problems occur: because no two hardware clocks drift the same way or have the exact same wallclock time, all the information traced by the daemon are all of a sudden inaccurate. This consequences could prove disastrous to the system. These could range anywhere from slowly recoverable errors to ones where the ntpd might never recover and where the virtualized OS could become unstable. &lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
&lt;br /&gt;
The authors of this timekeeping paper have done previous work exploring the feed-forward and feedback mechanisms for clock algorithm adjustment in non-virtual systems [7][8]. The RADclock algorithm (Robust Absolute Difference) was originally explored to address the drift resulting from NTP&#039;s feedback algorithm, and how non-ideal network conditions (a circumstance that is quite common) can have serious effects. In their original paper they improved network synchronization using the TimeStamp Counter (TSC) register a system call introduced in Pentium class machines as a source for a CPU cycle count. The use of a more reliable timestamp and counter provided &amp;quot;GPS-like&amp;quot; reliability in networked environments.&lt;br /&gt;
&lt;br /&gt;
This new paper seeks to take a similar approach in a virtual machine setting where VM migration can cause much more severe disruption than simply lost UDP packets. Rather than use TSC calls (&#039;&#039;rdtsc()&#039;&#039; in [8]) they tried several clock sources, seeking to eliminate variability from power management and CPU load when setting &#039;&#039;raw&#039;&#039; timestamps for use in guest machines.&lt;br /&gt;
&lt;br /&gt;
The paper makes several references to &#039;&#039;feed-forward&#039;&#039; and &#039;&#039;feedback&#039;&#039; mechanisms, and so a quick discussion of these control theories is in order. In feed-forward mechanisms, inputs to a process or calculation may be modified in advance, but the resulting output plays no part in subsequent calculations. In feedback mechanisms (such as NTPd) inputs to a calculation can be modified by outputs from previous ones. As a result, feedback mechanisms carry state from one step to the next. Since the state of a virtual environment may be rendered inaccurate by so many sources a feedback mechanism as a bad idea. The statelessness of feed-forward mechanisms confers advantages in VM migrations since a guest OS can simply discover the actual facts of timestamps rather than try to estimate them from their own invalid state information. The RADclock implementation makes use of this type of feed-forward design.&lt;br /&gt;
&lt;br /&gt;
The mechanism the authors used in the Xen environment was the XenStore, a filesystem structure like &#039;&#039;sysfs&#039;&#039; or &#039;&#039;procfs&#039;&#039; that permits communications between virtual domains using shared memory. Dom0 (Xen terminology for the hypervisor) takes sychronization from its own time server and saves the calculated clock parameters to the shared XenStore. On DomU (Xen for any guest OS) an application would simply use the shared information as a raw timestamp or a time difference. Their extensive testing showed a clear -- if not surprising -- advantage over NTPd.&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
The paper clearly addresses the two key problems with a feedback approach:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Stability of classical control approaches such as PLL and FLL cannot be guaranteed. &amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Difference clocks cannot even be defined in a feedback framework, so we lose their benefits, which include not only much higher accuracy, but also far higher robustness.&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Interestingly enough, a feed-forward approach does not in itself guarantee that the clock never moves backwards (a causality enforcing clock-read function can fix this without compromising the core design). The only drawback with implementing the new timing mechanism is with regards to system compatibility. In most Linux systems, the system clock maintained by the kernel is closely tied to the needs of NTP (ntpd). The kernel API&#039;s support the feedback paradigm only. This implies that a feed-forward based mechanism is simply not compatible with the system. Currently, the RADclock gets around the lack of feed-forward support by providing patches that extend the above mechanisms in FreeBSD and Linux in minimal ways to allow raw counter access to both the kernel and user space. The RADclock API includes difference and absolute clock reading functions based on direct counter timestamping, combined with the latest clock parameters and drift estimate.&lt;br /&gt;
&lt;br /&gt;
Another key point to remember is that synchronization over networks is actually impossible. Even the best timing counters/mechanisms are good based on how they manage an asymmetrical jitter. It would also be interesting if some intelligent heuristics were to be implemented with the existing NTP architecture based around the ideas presented in this paper. It could very well turn out to be a boon in disguise, without the need to adopt new kernel standards to support feed-forward algorithms. &lt;br /&gt;
&lt;br /&gt;
However, it is difficult to critique the authors&#039; work since they did a great job of finding a meaningful set of timestamps and counters and clearly demonstrated an advantage in their field of study. They also compared a number of time sources to ensure that their selection was meaningful and stable. And it&#039;s hard to argue with success. Their results approximate the variation that comes from CPU temperature. That&#039;s quite impressive.&lt;br /&gt;
As a student, one criticism might be that they found quite an obscure way of describing what was at heart a very simple problem. Of course, to be fair to these academics, their paper wasn&#039;t written for students. But the problem can be summed up succinctly: If you can&#039;t trust your own perception of time, ask the closest agent you &#039;&#039;can&#039;&#039; trust -- and make sure they&#039;ve check their watch. There is no closer source to a VM than its host, so find the fastest way there. Or if one has enough money and time on their hands, one can simply switch over to a GPS source or an atomic clock, both of which are better than the RADclock [12].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1. Broomhead, Cremean, Ridoux, Veitch. &amp;quot;Virtualize Everything But Time&amp;quot; . 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Broomhead.pdf&lt;br /&gt;
&lt;br /&gt;
2. &amp;quot;Timekeeping in Virtual Machines, Information Guide&amp;quot;. &#039;&#039;VMware&#039;&#039;. Web. http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
3. &amp;quot;Bran&#039;s Kernel Development Tutorial&amp;quot; from &#039;&#039;Bona Fide OS Developer&#039;&#039; . Web. http://www.osdever.net/bkerndev/Docs/pit.htm  . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
4. &amp;quot;What is a CMOS battery, and why does my computer need one?&amp;quot;  &#039;&#039;Indiana University&#039;s Knowledge Base&#039;&#039;, 2010. Web. http://kb.iu.edu/data/adoy.html . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
5. &amp;quot;Multiprocessor Specification version 1.4&amp;quot;. &#039;&#039;Intel&#039;&#039;. May 1997. http://developer.intel.com/design/pentium/datashts/24201606.pdf .  &lt;br /&gt;
&lt;br /&gt;
6. Pasztor and Veitch. &amp;quot;PC Based Precision Timing Without GPS&amp;quot;.  2002. http://www.cubinlab.ee.unimelb.edu.au/~darryl/Publications/tscclock_final.pdf .&lt;br /&gt;
&lt;br /&gt;
7. Veitch, Ridoux, Korada. &amp;quot;Robust Synchronization of Absolute and Difference Clocks over Networks&amp;quot;. 2009. http://www.cubinlab.ee.unimelb.edu.au/~darryl/Publications/synch_ToN.pdf&lt;br /&gt;
&lt;br /&gt;
8. Broomhead, T.; Ridoux, J.; Veitch, D. &amp;quot;Counter Availability and Characteristics for Feed-forward Based Synchronization&amp;quot;. &#039;&#039;ISPCS 2009: International Symposium on Precision Clock Synchronization for Measurement, Control and Communication&#039;&#039;, pp. 1-6, 12-16 Oct. 2009.&lt;br /&gt;
&lt;br /&gt;
9. &amp;quot;Advanced Configuration and Power Interface Specification&amp;quot;. &#039;&#039;Intel Corporation&#039;&#039;. April 2010. http://www.acpi.info/DOWNLOADS/ACPIspec40a.pdf. &lt;br /&gt;
&lt;br /&gt;
10. &amp;quot;Game timing and Multicore Processors&amp;quot;. &#039;&#039;msdn&#039;&#039; . Dec 2005. Web. http://msdn.microsoft.com/en-us/library/ee417693%28VS.85%29.aspx . Accessed Nov. 2010.&lt;br /&gt;
&lt;br /&gt;
11. &amp;quot;IA-PC HPET (High Precision Event Timers) Specification&amp;quot;. &#039;&#039;Intel Corporation&#039;&#039;. Oct. 2004. http://hackipedia.org/Hardware/HPET,%20High%20Performance%20Event%20Timer/IA-PC%20HPET%20%28High%20Precision%20Event%20Timers%29%20Specification.pdf .&lt;br /&gt;
&lt;br /&gt;
12. Ridoux, Veitch. &amp;quot;Principles of Robust Timing over the Internet&amp;quot;. &#039;&#039;ACM Queue&#039;&#039;, April 2010. http://queue.acm.org/detail.cfm?id=1773943 . Accessed: Dec. 2010.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_11&amp;diff=6883</id>
		<title>COMP 3000 Essay 2 2010 Question 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_11&amp;diff=6883"/>
		<updated>2010-12-03T07:17:50Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Virtualize Everything But Time =&lt;br /&gt;
Article written by Timothy Broomhead, Laurence Cremean, Julien Ridoux and Darryl Veitch of the Centre for Ultra-Broadband Information Networks (CUBIN), Department of Electrical &amp;amp; Electronic Engineering, University of Melbourne, Australia. Here is the link to the article: [http://www.usenix.org/events/osdi10/tech/full_papers/Broomhead.pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
The next time you notice one stranger ask another for the time and you see them check their watch, try this experiment: immediately ask too. Chances are the person will check their watch again. Why? Human internal clocks are notoriously unreliable. Our sense of time contracts and expands all day long. We seem to believe that a definitive report of time can only come from some mechanical or electronic source. So social norms require that the watch owner provide you with two things: 1) the time, and 2) a gesture of external authority, i.e. a glance at their watch.&lt;br /&gt;
&lt;br /&gt;
The story of time inside a virtual machine is almost as unreliable as our own internal clocks. How much time has elapsed since a VM client last had the CPU&#039;s attention? At the best of times there is no way for it to know, because it was not actually running the whole time. If the VM was suspended and migrated from one physical host to another, its concept of time is even worse. This paper is about how a computer glances at its metaphorical watch, and what kinds of timepieces it has at hand.&lt;br /&gt;
&lt;br /&gt;
To better understand this paper, it helps to have a good grasp of the general concepts behind it. We all know what clocks are in our day-to-day lives, but how are they different in the context of computing? In this section we describe timekeeping and hardware/software clocks, explore the advantages and disadvantages of the different available counters and synchronization algorithms, and explain what a para-virtualized system is.&lt;br /&gt;
&lt;br /&gt;
===Timekeeping===&lt;br /&gt;
&lt;br /&gt;
Computers typically measure time in one of two ways: tick counting and tickless timekeeping [2]. In tick counting, the operating system programs a hardware timer to interrupt the CPU at a fixed rate, and a counter is updated each time one of these interrupts occurs. It is this counter that allows the system to keep track of the passage of time.&lt;br /&gt;
&lt;br /&gt;
In tickless timekeeping, rather than the OS tracking time through interrupts, a hardware device with its own counter is used; the counter starts when the system boots, and the OS simply reads it when needed. Tickless timekeeping seems the better way to keep time because it does not hog the CPU with hardware interrupts; however, its performance depends heavily on the hardware used, and another disadvantage is that hardware counters tend to drift (see below). Neither method knows the actual time -- each only knows how long it has been since it last checked an authoritative source. Personal computers typically get their time from a battery-backed real-time clock (i.e. a CMOS clock). Networked machines often need more precise time, with a resolution in the millisecond range or below; in these cases a machine can query another source, such as one based on the Network Time Protocol (NTP).&lt;br /&gt;
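&lt;br /&gt;
As a toy illustration (ours, not from any of the cited sources), the two styles can be sketched in C. The interrupt rate, counter frequency and read_hw_counter() helper are all assumptions for illustration:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;

#define TICK_HZ    1000ULL       /* assumed interrupt rate: 1 kHz */
#define COUNTER_HZ 1000000ULL    /* assumed 1 MHz hardware counter */

static volatile uint64_t ticks;  /* updated by the timer interrupt */

/* Tick counting: the interrupt handler just increments a counter. */
void timer_interrupt(void) { ticks++; }

uint64_t tick_time_ns(void) { return ticks * (1000000000ULL / TICK_HZ); }

/* Tickless: read a free-running hardware counter only on demand.
   read_hw_counter() is a placeholder for a real counter read. */
extern uint64_t read_hw_counter(void);

uint64_t tickless_time_ns(void) {
    return read_hw_counter() * (1000000000ULL / COUNTER_HZ);
}
&lt;/pre&gt;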
&lt;br /&gt;
===Clocks===&lt;br /&gt;
&lt;br /&gt;
Computer “clocks” or “timers” can be hardware based, software based, or even a hybrid. The most commonly found timer is the hardware timer. Most hardware timers can be described by the following diagram, though individual devices may have more or fewer features:&lt;br /&gt;
&lt;br /&gt;
Diagram1. Timer Abstraction&lt;br /&gt;
&lt;br /&gt;
[[File:Timerabstract.jpg]]&lt;br /&gt;
&lt;br /&gt;
This diagram nicely represents how tick counting works. The oscillator runs at a predetermined frequency, which the operating system may have to measure when the system boots. The counter starts with a predetermined value that can be set by software. On every cycle of the oscillator the counter counts down one unit; when it reaches zero, it generates an output signal that may interrupt the CPU. That same interrupt then allows the counter&#039;s initial value to be reloaded, and the process begins again. Not all hardware timers work exactly like this: some count up, some do not use interrupts, and some do not keep an initial value. The general principle of hardware counters is, however, the same -- there is some kind of fixed interval, at the end of which the current time is advanced by an appropriate number of units (e.g. nanoseconds).&lt;br /&gt;
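&lt;br /&gt;
The countdown behaviour just described can be captured in a few lines of C; the reload value and helper names are illustrative, not a real device driver:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;

#define RELOAD 1193              /* assumed initial counter value */

extern void raise_interrupt(void);   /* output signal to the CPU */

static uint32_t counter = RELOAD;

/* Called once per oscillator cycle in this simulation. */
void oscillator_cycle(void) {
    if (--counter == 0) {
        raise_interrupt();       /* counter hit zero: signal the CPU */
        counter = RELOAD;        /* reload the initial value */
    }
}
&lt;/pre&gt;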
&lt;br /&gt;
===Timers===&lt;br /&gt;
# The PIT is useful for generating interrupts at regular intervals through its three channels. Channel 0 is bound to IRQ0, which interrupts the CPU at regular intervals; Channel 1 is system specific; and Channel 2 is connected to the PC speaker. As such, we only need to concern ourselves with Channel 0. [3]&lt;br /&gt;
# The CMOS RTC is kept powered by the CMOS battery, so it can keep track of things like the time even while the physical machine has no source of power. If there were no CMOS battery on the motherboard, the computer would reset to its default time on each restart. The battery itself can die, as expected, if the computer is powered off and unused for a long period of time; this can cause issues for the main OS as well as the VM. [4]&lt;br /&gt;
# Local APIC handles all external interrupts for the processor in the system. It can also accept and generate inter-processor interrupts between Local APICs. [5]&lt;br /&gt;
# ACPI establishes industry-standard interfaces for OS-directed device configuration and power management. It is an industry standard through its creators: Intel, Microsoft, Phoenix, Hewlett-Packard and Toshiba. Its power management covers all form factors: notebooks, desktops and servers. ACPI&#039;s goal is to improve power and configuration standards for hardware devices by transitioning to ACPI-compliant hardware, which allows the OS -- as well as the VM -- to have control over power management. [9]&lt;br /&gt;
# RDTSC, an instruction introduced with the x86 P5 (Pentium), reads the processor&#039;s cycle counter and permits high-resolution timing; however, it suffers from several flaws (a minimal user-space read appears after this list). Discontinuous values can occur when a thread is not always scheduled on the same processor, as is common on multicore systems; power management makes this worse and can eventually leave the per-core counters completely out of sync. Availability of dedicated hardware is another issue: &amp;quot;RDTSC locks the timing information that the application requests to the processor&#039;s cycle counter&amp;quot;, and with dedicated timing devices included on modern motherboards this method of locking the timing information will become obsolete. Lastly, the variability of the CPU&#039;s frequency must be taken into account: most modern laptops adjust the CPU frequency on the fly, raising it on demand and lowering it when idle, which yields longer battery life and less heat but regrettably makes RDTSC unreliable. [10]&lt;br /&gt;
# HPET defines a set of timers that the OS has access to and can assign to applications. Each timer can generate an interrupt when the least significant bits of its comparator equal the corresponding bits of the 64-bit counter value. However, a race condition can occur in which the target time has already passed, causing extra interrupts and extra work even for a simple task. It does produce fewer interrupts than its predecessors, the PIT and CMOS RTC, giving it an edge; despite its race condition, this modern timer is an improvement upon old practices. [11]&lt;br /&gt;
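&lt;br /&gt;
For concreteness, here is the minimal user-space TSC read promised above. The __rdtsc() intrinsic is available with GCC and Clang on x86; the result is subject to all the caveats about core migration and frequency scaling:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;x86intrin.h&amp;gt;   /* __rdtsc() on GCC/Clang */

int main(void) {
    uint64_t start = __rdtsc();
    /* ... work being timed ... */
    uint64_t end = __rdtsc();
    /* The difference is in CPU cycles; it is only meaningful if the
       thread stayed on one core and the frequency did not change. */
    printf(&amp;quot;elapsed cycles: %llu\n&amp;quot;, (unsigned long long)(end - start));
    return 0;
}
&lt;/pre&gt;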
&lt;br /&gt;
==Guest Timekeeping==&lt;br /&gt;
&lt;br /&gt;
Guest timekeeping uses the same general methods as any computer timekeeping: either tick counting or a tickless system. Where the two begin to differ, however, is that a host operating system can communicate directly with the physical hardware, while the guest operating system cannot; it must ask the host to communicate with the hardware on its behalf. This indirection is the greatest source of the guest operating system&#039;s clock losing accuracy, otherwise known as drift.&lt;br /&gt;
&lt;br /&gt;
===Sources of Drift===&lt;br /&gt;
&lt;br /&gt;
When a guest operating system is started, its clock simply synchronizes with the host&#039;s -- some hypervisors, such as VMware, also do this when a guest is resumed from a suspended state or restored from a snapshot. It is tempting to reason that, because the guest&#039;s clock starts off correct, it will remain correct from then on. Unfortunately, this is not the case. The first source of drift is simply electronics: a clock is almost never entirely accurate, having a slight error due to the effects of ambient temperature on oscillator frequency, even on the host system. Since the guest communicates with the host in order to keep track of its time, an error in the host&#039;s time is not only passed on to the guest; because the host is busy trying to correct its own time, the guest&#039;s request for a count is given slightly lower priority, costing it yet more accuracy. The larger the drift in the host, the larger the drift in the guest, as the host&#039;s drift simply compounds the issue.&lt;br /&gt;
&lt;br /&gt;
Aside from the host&#039;s own drift, the other cause of drift in the virtual environment is the fact that the guest is treated like a process by the host. In and of itself this does not seem like a problem, but the guest system can be suspended just before it tries to update its perception of time. With restricted CPU time, it is easy for the lost ticks to pile up and create a backlog. Even if the guest is checking with a time server over the network, its network conversation can be suspended before the answer comes back; when the process resumes, the guest has a wildly inaccurate perception of the exchange&#039;s round-trip time (RTT), and its incorrect adjustment for network delay throws off the clock. Problems can also come from memory swaps performed by the host. If the virtual environment does not have enough memory allocated to it by the host, it can run into the problem of swapping out pages that are needed soon; swapping the pages back in momentarily brings the entire virtual environment to a halt, so ticks are missed and the clock falls behind.&lt;br /&gt;
&lt;br /&gt;
Clearly, the errors in virtual machine timekeeping come from algorithms that were simply not designed for virtual environments: they assume a more stable physical world than they actually have.&lt;br /&gt;
&lt;br /&gt;
===Impact of Drift===&lt;br /&gt;
&lt;br /&gt;
The sources of drift essentially boil down to round-off errors and lost ticks. But the practical impact of drift is quite apparent in any automated system. For a real-world analogy consider a factory&#039;s assembly line, where the machinery is finely tuned to do its own specific part at certain intervals, and generally does this with impressive efficiency. If the clock in the system were to drift, however, a specific machine may move too soon or too late, bringing the line to a potentially catastrophic halt.&lt;br /&gt;
&lt;br /&gt;
In a virtual environment, drift is a bit more subtle. One result can be skewed process scheduling: some schedulers give a certain amount of time to a process before moving on, and if the guest&#039;s time has drifted substantially, its attempt to correct that time can hand the processes in the scheduler more or less time than intended.&lt;br /&gt;
&lt;br /&gt;
The impacts of drift are even more apparent when virtual machines must communicate with one another. Consider the case of a farm of distributed gaming servers, where differences in timing can mean virtual characters suddenly warp at super-human speed across the landscape. Or for a more serious example consider investment trading, where missing the moment to bid can mean a difference of millions of dollars.&lt;br /&gt;
&lt;br /&gt;
===Compensation Strategies===&lt;br /&gt;
&lt;br /&gt;
There are a number of compensation strategies for dealing with drift, depending on its cause. If the problem is due to CPU management issues, the host can give more CPU time to the virtual machine, lower the timer interrupt rate, or simply use a tickless counter. If it is due to a memory management issue, allocating more memory to the virtual environment should prevent the system from needing to swap pages out so often.&lt;br /&gt;
&lt;br /&gt;
If the issue is neither of those, but simply the inevitable lag when the guest communicates with the hardware via the host, there are other methods to correct the drift. Most systems natively have algorithms built in to correct the time if it gets too far ahead of or behind real time, though these are not without their own faults: if the time is set ahead when catching up, the backlog of accumulated ticks may not be cleared, so the clock could set itself ahead repeatedly until the backlog is dealt with. Tools built into the virtual machine itself, such as VMware Tools, can also deal with drift to an extent. Such a tool checks whether the clock&#039;s error is within a certain margin; if the margin is exceeded, the backlog is set to zero -- preventing the issue just mentioned -- and the clock is resynchronized with the host before the guest goes back to keeping track of time as it normally would.&lt;br /&gt;
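&lt;br /&gt;
The threshold-and-resynchronize behaviour described above can be paraphrased in C. This is our sketch of the idea, not VMware&#039;s code; the margin and helper functions are assumptions:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;
#include &amp;lt;stdlib.h&amp;gt;

#define MAX_ERROR_NS 100000000LL       /* assumed 100 ms margin */

extern int64_t guest_time_ns(void);    /* placeholder clock reads */
extern int64_t host_time_ns(void);
extern void set_guest_time_ns(int64_t t);
extern void clear_tick_backlog(void);

void check_drift(void) {
    int64_t err = guest_time_ns() - host_time_ns();
    if (llabs(err) &amp;gt; MAX_ERROR_NS) {
        clear_tick_backlog();              /* drop pending catch-up ticks */
        set_guest_time_ns(host_time_ns()); /* resynchronize with the host */
    }
}
&lt;/pre&gt;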
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Today, the Network Time Protocol, served by daemons like ntpd, is the dominant solution for accurate timekeeping. Under optimal conditions ntpd can perform very well, but such conditions are rare: network congestion, disconnections, lower-quality networking hardware and unexpected system events can create offset errors on the order of tens or even hundreds of milliseconds. [6]&lt;br /&gt;
For demanding applications, this is neither robust nor reliable. One way to enhance the performance of ntpd would be to poll the NTP server more often, which would reduce the offset error. Unfortunately, the increased network traffic could itself cause congestion and raise the offset error, so this approach does not work.&lt;br /&gt;
&lt;br /&gt;
Another problem with current system software clocks that use NTP (like ntpd) is that they provide only an absolute clock. [7] Such clocks are unsuitable for applications dealing with network management and measurement, because NTP focuses on correcting the clock&#039;s offset rather than the hardware oscillator&#039;s rate. When calculating delay variations, for example, a constant offset error changes nothing in the calculation, but variation in the oscillator&#039;s rate does affect it. Having timestamps that are accurate in this sense would make such calculations more precise -- which means we would need another kind of system software clock.&lt;br /&gt;
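&lt;br /&gt;
To see why a difference clock matters, consider measuring a round-trip time from a raw counter. Any constant offset error cancels in the subtraction; only the stability of the counter&#039;s period matters. The names below are illustrative:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;

extern uint64_t raw_counter(void);   /* free-running counter read */
extern void probe_and_wait(void);    /* send probe, wait for reply */

/* Difference clock: period_ns is the estimated nanoseconds per
   counter unit; the absolute offset never enters the calculation. */
double rtt_ns(double period_ns) {
    uint64_t t_send = raw_counter();
    probe_and_wait();
    uint64_t t_recv = raw_counter();
    return (double)(t_recv - t_send) * period_ns;
}
&lt;/pre&gt;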
&lt;br /&gt;
In virtualization (in this case Xen), migrating a running guest from one physical machine to another can cause issues, and these are again caused by the ntpd daemon. By default, each guest OS runs its own instance of ntpd, whose synchronization algorithm keeps track of the reference wallclock time, the rate of drift and the current clock error, all defined relative to the hardware clock of the machine it runs on. When the virtualized OS is migrated, the ntpd state is saved; but when it is enabled again on the new machine, problems occur: because no two hardware clocks drift the same way or share exactly the same wallclock time, all of the state tracked by the daemon is suddenly inaccurate. The consequences could prove disastrous for the system, ranging from slowly recoverable errors to cases where ntpd never recovers and the virtualized OS becomes unstable.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
&lt;br /&gt;
The authors of this timekeeping paper have done previous work exploring feed-forward and feedback mechanisms for clock adjustment in non-virtual systems [7][8]. The RADclock (Robust Absolute and Difference clock) algorithm was originally developed to address the drift resulting from NTP&#039;s feedback algorithm, and the serious effects that non-ideal network conditions (a quite common circumstance) can have. In their original paper they improved network synchronization by using the TimeStamp Counter (TSC), a register introduced in Pentium-class machines, as a source for a CPU cycle count. The use of a more reliable timestamp and counter provided &amp;quot;GPS-like&amp;quot; reliability in networked environments.&lt;br /&gt;
&lt;br /&gt;
This new paper seeks to take a similar approach in a virtual machine setting, where VM migration can cause much more severe disruption than simply lost UDP packets. Rather than relying on TSC reads (&#039;&#039;rdtsc()&#039;&#039; in [8]), they tried several clock sources, seeking to eliminate variability from power management and CPU load when taking &#039;&#039;raw&#039;&#039; timestamps for use in guest machines.&lt;br /&gt;
&lt;br /&gt;
The paper makes several references to &#039;&#039;feed-forward&#039;&#039; and &#039;&#039;feedback&#039;&#039; mechanisms, so a quick discussion of these control concepts is in order. In feed-forward mechanisms, inputs to a process or calculation may be modified in advance, but the resulting output plays no part in subsequent calculations. In feedback mechanisms (such as ntpd), inputs to a calculation can be modified by outputs from previous ones; feedback mechanisms therefore carry state from one step to the next. Since the state of a virtual environment may be rendered inaccurate by so many sources, a feedback mechanism is a bad idea there. The statelessness of feed-forward mechanisms confers an advantage during VM migration, since a guest OS can simply discover the actual timestamps rather than try to estimate them from its own, now-invalid state. The RADclock implementation makes use of this feed-forward design.&lt;br /&gt;
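&lt;br /&gt;
The contrast can be caricatured in a few lines of C. In the feedback style each reading steers persistent state, which a migration silently invalidates; in the feed-forward style a reading is just a raw timestamp combined with the latest parameter estimates. This is our own illustration, not code from the paper:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;

extern uint64_t raw_counter(void);

/* Feedback: the correction applied now depends on previous outputs. */
static double fb_rate = 1.0;              /* persistent steered state */
double feedback_read_ns(double measured_error) {
    fb_rate -= 0.1 * measured_error;      /* steer the rate estimate */
    return (double)raw_counter() * fb_rate;
}

/* Feed-forward: parameters are estimated elsewhere and simply
   applied; stale values are replaced whenever fresh ones arrive. */
double feedforward_read_ns(double period_ns, double offset_ns) {
    return (double)raw_counter() * period_ns + offset_ns;
}
&lt;/pre&gt;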
&lt;br /&gt;
The mechanism the authors used in the Xen environment was the XenStore, a filesystem-like structure similar to &#039;&#039;sysfs&#039;&#039; or &#039;&#039;procfs&#039;&#039; that permits communication between virtual domains through shared memory. Dom0 (Xen&#039;s privileged control domain) synchronizes with its own time server and saves the calculated clock parameters to the shared XenStore. On a DomU (any guest domain), an application simply uses the shared information to form a raw timestamp or a time difference. The authors&#039; extensive testing showed a clear -- if unsurprising -- advantage over ntpd.&lt;br /&gt;
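&lt;br /&gt;
Conceptually, Dom0 publishes its clock parameters and each DomU combines them with a local raw timestamp. The sketch below uses placeholder store_read/store_write helpers rather than the real XenStore API, and the key layout is invented for illustration:&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;stdint.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

/* Placeholders standing in for XenStore writes and reads. */
extern void store_write(const char *path, const char *value);
extern void store_read(const char *path, char *buf, int len);
extern uint64_t raw_counter(void);

/* Dom0 side: publish the latest estimates (illustrative key name). */
void publish_params(double period_ns, double offset_ns) {
    char buf[64];
    snprintf(buf, sizeof buf, &amp;quot;%.9f %.3f&amp;quot;, period_ns, offset_ns);
    store_write(&amp;quot;/clock/params&amp;quot;, buf);
}

/* DomU side: read the shared parameters and stamp locally. */
double read_time_ns(void) {
    char buf[64];
    double period_ns, offset_ns;
    store_read(&amp;quot;/clock/params&amp;quot;, buf, sizeof buf);
    sscanf(buf, &amp;quot;%lf %lf&amp;quot;, &amp;amp;period_ns, &amp;amp;offset_ns);
    return (double)raw_counter() * period_ns + offset_ns;
}
&lt;/pre&gt;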
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
The paper clearly identifies the two key problems with a feedback approach:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Stability of classical control approaches such as PLL and FLL cannot be guaranteed. &amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Difference clocks cannot even be defined in a feedback framework, so their benefits -- not only much higher accuracy but also far greater robustness -- are lost.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Interestingly enough, a feed-forward approach does not in itself guarantee that the clock never moves backwards (a causality-enforcing clock-read function can fix this without compromising the core design). The main drawback of implementing the new timing mechanism concerns system compatibility. In most Linux systems, the system clock maintained by the kernel is closely tied to the needs of NTP (ntpd), and the kernel APIs support only the feedback paradigm, so a feed-forward mechanism is simply not compatible out of the box. Currently, RADclock works around the lack of feed-forward support by providing patches that extend the FreeBSD and Linux kernels in minimal ways to allow raw counter access from both kernel and user space. The RADclock API includes difference- and absolute-clock reading functions based on direct counter timestamping, combined with the latest clock parameters and drift estimate.&lt;br /&gt;
&lt;br /&gt;
Another key point to remember is that perfect synchronization over a network is fundamentally impossible; even the best timing mechanisms are distinguished by how well they manage asymmetric delay and jitter. It would also be interesting to see intelligent heuristics, built around the ideas presented in this paper, implemented within the existing NTP architecture. That could well prove a boon, avoiding the need to adopt new kernel interfaces to support feed-forward algorithms.&lt;br /&gt;
&lt;br /&gt;
However, it is difficult to fault the authors&#039; work: they found a meaningful set of timestamps and counters, clearly demonstrated an advantage in their field of study, and compared a number of time sources to ensure that their selection was meaningful and stable. It is hard to argue with success -- their residual errors approach the variation caused by CPU temperature alone, which is quite impressive.&lt;br /&gt;
As students, one criticism we might offer is that they found quite an obscure way of describing what is at heart a very simple problem. To be fair, their paper was not written for students. The problem can be summed up succinctly: if you cannot trust your own perception of time, ask the closest agent you &#039;&#039;can&#039;&#039; trust -- and make sure they have checked their watch. There is no closer source to a VM than its host, so find the fastest way there. Alternatively, given enough money and time, one can simply switch to a GPS source or an atomic clock, both of which outperform RADclock [12].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
1. Broomhead, Cremean, Ridoux, Veitch. &amp;quot;Virtualize Everything But Time&amp;quot; . 2010. http://www.usenix.org/events/osdi10/tech/full_papers/Broomhead.pdf&lt;br /&gt;
&lt;br /&gt;
2. &amp;quot;Timekeeping in Virtual Machines, Information Guide&amp;quot;. &#039;&#039;VMware&#039;&#039;. Web. http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
3. &amp;quot;Bran&#039;s Kernel Development Tutorial&amp;quot; from &#039;&#039;Bona Fide OS Developer&#039;&#039; . Web. http://www.osdever.net/bkerndev/Docs/pit.htm  . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
4. &amp;quot;What is a CMOS battery, and why does my computer need one?&amp;quot;  &#039;&#039;Indiana University&#039;s Knowledge Base&#039;&#039;, 2010. Web. http://kb.iu.edu/data/adoy.html . Accessed: Nov. 2010. &lt;br /&gt;
&lt;br /&gt;
5. &amp;quot;Multiprocessor Specification version 1.4&amp;quot;. &#039;&#039;Intel&#039;&#039;. May 1997. http://developer.intel.com/design/pentium/datashts/24201606.pdf .  &lt;br /&gt;
&lt;br /&gt;
6. Pasztor and Veitch. &amp;quot;PC Based Precision Timing Without GPS&amp;quot;.  2002. http://www.cubinlab.ee.unimelb.edu.au/~darryl/Publications/tscclock_final.pdf .&lt;br /&gt;
&lt;br /&gt;
7. Veitch, Ridoux, Korada. &amp;quot;Robust Synchronization of Absolute and Difference Clocks over Networks&amp;quot;. 2009. http://www.cubinlab.ee.unimelb.edu.au/~darryl/Publications/synch_ToN.pdf&lt;br /&gt;
&lt;br /&gt;
8. Broomhead, T.; Ridoux, J.; Veitch, D. &amp;quot;Counter Availability and Characteristics for Feed-forward Based Synchronization&amp;quot;. &#039;&#039;ISPCS 2009: International Symposium on Precision Clock Synchronization for Measurement, Control and Communication&#039;&#039;, pp. 1-6, 12-16 Oct. 2009.&lt;br /&gt;
&lt;br /&gt;
9. &amp;quot;Advanced Configuration and Power Interface Specification&amp;quot;. &#039;&#039;Intel Corporation&#039;&#039;. April 2010. http://www.acpi.info/DOWNLOADS/ACPIspec40a.pdf. &lt;br /&gt;
&lt;br /&gt;
10. &amp;quot;Game timing and Multicore Processors&amp;quot;. &#039;&#039;msdn&#039;&#039; . Dec 2005. Web. http://msdn.microsoft.com/en-us/library/ee417693%28VS.85%29.aspx . Accessed Nov. 2010.&lt;br /&gt;
&lt;br /&gt;
11. &amp;quot;IA-PC HPET (High Precision Event Timers) Specification&amp;quot;. &#039;&#039;Intel Corporation&#039;&#039;. Oct. 2004. http://hackipedia.org/Hardware/HPET,%20High%20Performance%20Event%20Timer/IA-PC%20HPET%20%28High%20Precision%20Event%20Timers%29%20Specification.pdf .&lt;br /&gt;
&lt;br /&gt;
12. Ridoux, Veitch. &amp;quot;Principles of Robust Timing over the Internet&amp;quot;. &#039;&#039;ACM Queue&#039;&#039;, April 2010. http://queue.acm.org/detail.cfm?id=1773943 . Accessed: Dec. 2010.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_11&amp;diff=6805</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_11&amp;diff=6805"/>
		<updated>2010-12-03T04:50:04Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;--[[User:AbsMechanik|AbsMechanik]] 04:50, 3 December 2010 (UTC) I guess I owe it to the lack of sleep, but I finally came across a technical paper which raises a few interesting points for the critique section. Here&#039;s the link: http://queue.acm.org/detail.cfm?id=1773943&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 22:21, 2 December 2010 (UTC) As I mentioned, the info came from, mostly, &#039;Timekeeping in Virtual Machines&#039; (2nd point in the References section). Also, technically speaking direct citations are only ever required when taking quotes from something or using specific numbers, and since I didn&#039;t use either, didn&#039;t see the point in citing. Which seems lazy, but I&#039;ve never lost any marks for a lack of citing on essays before, so figure it&#039;s probably not a big deal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The guest time-keeping section is really good but requires citations. Does someone know where exactly the info came from? - Fedor&lt;br /&gt;
&lt;br /&gt;
Hi, I&#039;m making some cosmetic changes to style, grammar&amp;amp;citation-format.  - Fedor&lt;br /&gt;
&lt;br /&gt;
--[[User:Spanke|Spanke]] 00:19, 2 December 2010 (UTC) Finished Timers, I hate 3004...&lt;br /&gt;
&lt;br /&gt;
--[[User:Jjpwilso|Jjpwilso]] 00:03, 2 December 2010 (UTC) I&#039;ll check for some more references on the ACM and IEEE databases. In the meantime I thought I&#039;d mention what Anil said regarding critique. He suggested we should consider other approaches to the same solution, such as modifying NTP with a different heuristic. I&#039;ll see what I can dig up in other papers on NTP.&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 22:06, 1 December 2010 (UTC) I&#039;m assuming you meant for me to add my references, yes? I really only used the article, and &#039;Timekeeping in Virtual Machines&#039; which I went to add, but is already on there. I&#039;ve looked for other articles to try to get how others have looked at it that aren&#039;t VMware, but there really isn&#039;t a huge amount out there dealing &#039;&#039;specifically&#039;&#039; with guest timekeeping (unless I&#039;ve gone Google-blind, which has admittedly happened before). Mostly I ran into links pointing to that specific article.&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 17:50, 1 December 2010 (UTC) I added stuff into the Research problems section. I think I summarized most of them; if I forgot any, please add them in. I also added the missing references in the reference section. For Fedor: we still seem to be missing content in 2 sections. Also, you could read through the other sections and add/change any pertinent information that might&#039;ve been missed to make this essay even better.&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 15:11, 1 December 2010 (UTC) Would it be possible to add your references at the bottom please? Even if it is a link. I have added the article link at the top of the essay.&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry it&#039;s taken me a while to post here. If there is a particular topic that needs researching, I could spend some hours doing that tomorrow - suggestions? Also, I intend to fix up the style&amp;amp;structure after everything is done, as I am quite good with that. &lt;br /&gt;
&lt;br /&gt;
Fedor&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 21:36, 26 November 2010 (UTC) So I was a little (more than a little) behind on my initially estimated time for getting stuff up on Guest Timekeeping, but that&#039;s the gist of it there now. I&#039;m going to try to buff it up a bit before it&#039;s due, since what I put in is a bit rougher than I&#039;d like. If I seem to be missing something that should be pretty obvious, let me know and I&#039;ll work it in.&lt;br /&gt;
&lt;br /&gt;
--[[User:Jjpwilso|Jjpwilso]] 15:49, 23 November 2010 (UTC) I&#039;ve been completely swamped with COMP3004 stuff (among other things) and feeling guilty as hell about this essay. The good news, for those who might have missed today&#039;s lecture, is we have an extension of one week. Phew!!&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 21:29, 22 November 2010 (UTC) I have added a small part to the background section. I have drawn, by hand, a diagram explaining how it works. I tried to find an original way of presenting it, but it is the same diagram everywhere. Please feel free to comment here or by sending me an email.&lt;br /&gt;
&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 19:46, 22 November 2010 (UTC) Here&#039;s what my research has led me to so far. I&#039;m trying to come up with good points for the research problem, contribution and critique parts of this essay. Here&#039;s a bunch of links I&#039;ve come across; I think there will be a few more tonight. Feel free to read through &#039;em: &lt;br /&gt;
&amp;lt;br&amp;gt;http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.xen.org/files/xen_interface.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.microsoft.com/whdc/system/sysinternals/mm-timer.mspx&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.intel.com/hardwaredesign/hpetspec_1.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;http://www.cubinlab.ee.unimelb.edu.au/radclock/&lt;br /&gt;
&lt;br /&gt;
--[[User:ScottG|ScottG]] 18:55, 22 November 2010 (UTC) I&#039;m good taking the Guest Timekeeping section. Hopefully I&#039;ll have some stuff up tonight or early tomorrow for it.&lt;br /&gt;
&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 17:14, 22 November 2010 (UTC) I will be working on the Background section. I will dedicate it to explaining some of the key concepts used in the research paper, which will allow readers to better understand the rest of our essay. The structure you&#039;ve put in place looks good, but it might get modified depending on how the text flows. The diagram is a good idea. I will draw a simple one and add it in. Feel free again to critique.&lt;br /&gt;
&lt;br /&gt;
--[[User:Jjpwilso|Jjpwilso]] 15:12, 16 November 2010 (UTC) I wanted to get a structure started, so I have stubbed out the first section. Note: some of the sub-sections might belong in the Research Problem section but we can easily move them if they fit there. Let&#039;s use this area to plan who is doing what. Feel free to critique any of my submissions. When you comment here, please put your comments at the very top so we can easily see recent posts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Participants=&lt;br /&gt;
(X) Blais   Sylvain sblais2 - Email: syl20blais@gmail.com&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Graham  Scott   sgraham6&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Ilitchev Fedor  filitche fedor dot ilitchev at gmail dot com &amp;lt;br&amp;gt; &lt;br /&gt;
(X) Panke   Shane   spanke&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Shukla  Abhinav ashukla2&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Wilson  Robert  jjpwilso&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_11&amp;diff=4932</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 11</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_11&amp;diff=4932"/>
		<updated>2010-11-12T00:37:41Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Please mark an X if you are able to participate.&lt;br /&gt;
&lt;br /&gt;
(X) Blais   Sylvain sblais2&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Graham  Scott   sgraham6&amp;lt;br&amp;gt;&lt;br /&gt;
( ) Ilitchev Fedor  filitche&amp;lt;br&amp;gt;&lt;br /&gt;
( ) Panke   Shane   spanke&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Shukla  Abhinav ashukla2&amp;lt;br&amp;gt;&lt;br /&gt;
(X) Wilson  Robert  jjpwilso&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4683</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4683"/>
		<updated>2010-10-15T09:43:37Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
(edited by --[[User:AbsMechanik|AbsMechanik]] 09:43, 15 October 2010 (UTC))&lt;br /&gt;
There are several different algorithms utilized by different schedulers; a few key ones are outlined below, with a small illustrative sketch after the list[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine their priorities. For example, I/O-bound interactive tasks are typically given higher priority, since they spend most of their time waiting for the user and release the CPU quickly when scheduled.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
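To make the round-robin idea concrete, here is a minimal, hypothetical sketch in C (not taken from any of the schedulers discussed; task_t, run_for_ms() and the 100 ms quantum are assumptions for illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define QUANTUM_MS 100                /* fixed time slice per task */&lt;br /&gt;
&lt;br /&gt;
typedef struct task {&lt;br /&gt;
    int pid;&lt;br /&gt;
    int finished;&lt;br /&gt;
    struct task *next;                /* circular list of runnable tasks */&lt;br /&gt;
} task_t;&lt;br /&gt;
&lt;br /&gt;
void run_for_ms(task_t *t, int ms);   /* assumed helper: run t for one quantum */&lt;br /&gt;
&lt;br /&gt;
/* Round-robin: give each unfinished task one quantum in turn. */&lt;br /&gt;
void round_robin(task_t *t, int n_unfinished) {&lt;br /&gt;
    while (n_unfinished &amp;gt; 0) {&lt;br /&gt;
        if (!t-&amp;gt;finished) {&lt;br /&gt;
            run_for_ms(t, QUANTUM_MS);&lt;br /&gt;
            if (t-&amp;gt;finished)&lt;br /&gt;
                n_unfinished--;&lt;br /&gt;
        }&lt;br /&gt;
        t = t-&amp;gt;next;                /* wrap around the circular list */&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;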
There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers utilize a combination of the different algorithms, such as the Multi-Level Feedback Queue, which in one way or another was utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware increases in complexity, such as multi-core CPUs (parallelization), and with the advent of more powerful embedded/mobile devices, operating system schedulers have evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX-based systems, the BSD scheduler was designed to work on a single-core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (as predicted by &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity highlighted the problem of managing processes with an O(n) scheduler: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable on multi-core CPUs, a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
(modified and edited by --[[User:AbsMechanik|AbsMechanik]] 08:51, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler would give processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues, into which the threads were placed. The scheduler would evaluate the runqueues by priority, from highest to lowest, and execute the first thread of the first non-empty runqueue it found. Each thread in that runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. A hypothetical sketch of this selection loop follows. &lt;br /&gt;
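&lt;br /&gt;
The sketch below uses assumed names, not the actual FreeBSD code, to show the highest-priority-first scan over runqueues:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define NQUEUES 64                    /* number of priority levels (assumed) */&lt;br /&gt;
&lt;br /&gt;
typedef struct thread thread_t;       /* opaque here */&lt;br /&gt;
&lt;br /&gt;
/* One FIFO runqueue per priority; index 0 is the highest priority. */&lt;br /&gt;
extern thread_t *runq_head[NQUEUES];&lt;br /&gt;
&lt;br /&gt;
/* Return the first thread of the highest-priority non-empty runqueue. */&lt;br /&gt;
thread_t *choose_next(void) {&lt;br /&gt;
    for (int pri = 0; pri &amp;lt; NQUEUES; pri++)&lt;br /&gt;
        if (runq_head[pri] != NULL)&lt;br /&gt;
            return runq_head[pri];&lt;br /&gt;
    return NULL;                      /* nothing runnable: run the idle loop */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;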
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Symmetric Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required of increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate with more modern computer systems, it became clear that a new scheduler would be required, and thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented as an &amp;quot;experimental&amp;quot; scheduler (by Jeff Roberson) in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind, with proper support for Symmetric Multi-Processing (SMP, including HTT) and Symmetric Multi-Threading (SMT) platforms, and could handle heavy workloads. Primarily an event-driven scheduler, ULE utilized a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) to ensure fairness. This mechanism is briefly outlined as follows, with a small sketch after the list [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process threads are assigned to 2 queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread is either assigned to &#039;current&#039; or &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process execution first begins in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are switched and the threads are executed in a similar manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;All idle threads are stored in a third queue, &#039;idle&#039;, which is run only when &#039;current&#039; and &#039;next&#039; are empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
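A minimal sketch of the queue-switching step (thread_t, queue_t and dequeue_highest() are assumptions for illustration, not ULE&#039;s actual interfaces):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef struct thread thread_t;            /* opaque here */&lt;br /&gt;
typedef struct queue  queue_t;             /* priority-ordered runqueue */&lt;br /&gt;
&lt;br /&gt;
thread_t *dequeue_highest(queue_t *q);     /* assumed helper */&lt;br /&gt;
&lt;br /&gt;
queue_t *current, *next, *idleq;&lt;br /&gt;
&lt;br /&gt;
thread_t *ule_choose(void) {&lt;br /&gt;
    thread_t *td = dequeue_highest(current);&lt;br /&gt;
    if (td == NULL) {&lt;br /&gt;
        /* &#039;current&#039; has drained: swap the two queues and retry */&lt;br /&gt;
        queue_t *tmp = current;&lt;br /&gt;
        current = next;&lt;br /&gt;
        next = tmp;&lt;br /&gt;
        td = dequeue_highest(current);&lt;br /&gt;
    }&lt;br /&gt;
    if (td == NULL)&lt;br /&gt;
        td = dequeue_highest(idleq);       /* both queues empty */&lt;br /&gt;
    return td;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;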
It has been the default scheduler since v7.1. ULE works well in both uniprocessor and multi-core environments: it prevents unnecessary CPU migration while making good use of CPU resources. However, two key practical problems arose from the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Implementing two queues meant more memory usage and more cache misses&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Having multiple threads on different queues caused problems for the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
These problems were resolved by implementing a circular queue, inspired by the calendar queue data structure. Since there is only one queue, there is little extra CPU overhead, and there is more flexibility in deciding how much runtime each thread gets relative to threads of lower priority.&lt;br /&gt;
With these design approaches, ULE is also slightly faster on uniprocessor systems than the 4.4BSD scheduler.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(edited &amp;amp; modified by --[[User:AbsMechanik|AbsMechanik]] 08:50, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Unlike its competitor FreeBSD, the Linux scheduler has undergone several distinct cycles of evolution [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html], always aiming for a fair and fast scheduler. Various methods and concepts have been tried across versions, such as round robin, iteration, and queues. A quick read through Linux&#039;s history suggests that equal, balanced use of the system (fairness) was the scheduler&#039;s first goal; speed was tackled later, to ensure that run times did not suffer.&lt;br /&gt;
Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. With the advent of more complicated hardware (multi-core CPUs), Linux faced new challenges, much as BSD did. Various techniques were employed to accommodate these changes, ranging from the inefficient O(n) scheduler to the CFS (the current version) [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html].&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a &amp;lt;b&amp;gt;round robin policy&amp;lt;/b&amp;gt; using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed to use the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler that supported SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It became more complex than its predecessors, but it also provided more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice; if a task did not use up its entire time slice, the remaining time was added to the next time slice so the task could execute longer in its next epoch. Because it simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When a task&#039;s share of time falls out of balance, it must be given more time, because the scheduler has to preserve fairness. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now uses a time-ordered red-black tree. It is self-balancing and operates in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks most in need of the processor sit toward the left side of the tree, and tasks with less need of the CPU toward the right. To keep fairness, the scheduler takes the left-most node of the tree, accounts the execution time the task receives on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This way, tasks on the left side are given time to execute, while the contents of the right side migrate leftward over time, maintaining fairness. A simplified sketch of one such scheduling step follows.&lt;br /&gt;
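&lt;br /&gt;
The sketch below shows one pick-run-reinsert step; the rbtree_*, run_one_slice() and task_runnable() names are placeholders for illustration, not the kernel&#039;s actual API:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef unsigned long long u64;&lt;br /&gt;
&lt;br /&gt;
struct cfs_task {&lt;br /&gt;
    u64 vruntime;                    /* accumulated (weighted) runtime so far */&lt;br /&gt;
    /* ... red-black tree linkage omitted ... */&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
/* Placeholder interfaces assumed for this sketch: */&lt;br /&gt;
struct cfs_task *rbtree_leftmost(void);   /* task with smallest vruntime */&lt;br /&gt;
void rbtree_insert(struct cfs_task *t);   /* position keyed by vruntime */&lt;br /&gt;
u64  run_one_slice(struct cfs_task *t);   /* returns CPU time consumed */&lt;br /&gt;
int  task_runnable(struct cfs_task *t);&lt;br /&gt;
&lt;br /&gt;
/* One simplified scheduling step. */&lt;br /&gt;
void cfs_step(void) {&lt;br /&gt;
    struct cfs_task *t = rbtree_leftmost();&lt;br /&gt;
    t-&amp;gt;vruntime += run_one_slice(t);    /* charging time pushes t rightward */&lt;br /&gt;
    if (task_runnable(t))&lt;br /&gt;
        rbtree_insert(t);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;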
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it is about sharing system resources. A program with a lower nice level is greedier, while a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive ones. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust the niceness of a program with the shell command &#039;&#039;nice&#039;&#039; (or, programmatically, the nice() system call). Nice values range from -20 to +19. &lt;br /&gt;
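&lt;br /&gt;
For instance, a process can make itself nicer with the POSIX nice() call; the following is a small illustrative program, not taken from the essay&#039;s sources:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    /* Raise our own nice level by 10 (lower priority). Only a&lt;br /&gt;
     * privileged process may pass a negative increment. */&lt;br /&gt;
    int new_nice = nice(10);&lt;br /&gt;
    printf(&quot;now running at nice %d\n&quot;, new_nice);&lt;br /&gt;
    /* ... CPU-heavy work here now competes less aggressively ... */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The same effect from the shell would be something like &#039;&#039;nice -n 10 ./worker&#039;&#039; (the name &#039;&#039;worker&#039;&#039; is hypothetical).&lt;br /&gt;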
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
(This work was done by--[[User:AbsMechanik|AbsMechanik]] 09:33, 15 October 2010 (UTC))&lt;br /&gt;
A few key differences between the Linux &amp;amp; FreeBSD schedulers are outlined below:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE runs in O(1) time whereas CFS runs in O(log(n)) time&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE is based on run-queues whereas CFS utilizes a red-black tree&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Context switching is done much faster by ULE than by the Linux scheduler [http://jeffr-tech.livejournal.com/19139.html]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Hardware innovation has been the primary driving force behind the evolution of these schedulers, and with the ever-increasing complexity of parallel &amp;amp; distributed systems and of managing multi-threaded workloads, it is clear that even the current schedulers will undergo radical changes to meet new challenges. It remains to be seen what each group comes up with to address them.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4680</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4680"/>
		<updated>2010-10-15T09:42:42Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm and most schedulers utilize a combination of the different algorithms, such as the Multi-Level Feedback Queue, which in one way or another was utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing for certain is that as computer hardware increases in complexity, such as multiple core CPUs (parallelization), and with the advent of more powerful embedded/mobile devices, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs, naturally, increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (derived from &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity highlighted the problem of having an O(n) scheduler for managing processes, as more items were added to the scheduling algorithm, the performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPU&#039;s) a better scheduler was required. This was the driving force behind the creation of ULE for the FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
(modified and edited by --[[User:AbsMechanik|AbsMechanik]] 08:51, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first and the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues, into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal value time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Symmetric Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on ever increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate with more modern computer systems, it became clear that a new scheduler would be required, and thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented as part of an &amp;quot;experimental&amp;quot; process (by Jeff Roberson) in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind and had proper support for Symmetric Multi-Processing (SMP) (and HTT), Symmetric Multi-Threading (SMT) platforms and handle heavy workloads. Primarily being an event-driven scheduler, ULE utilized a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) for ensuring fairness. This mechanism is briefly outlined as follows[http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process threads are assigned in 2 queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread is either assigned to &#039;current&#039; or &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process execution first begins in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are switched and the threads are executed in a similar manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;All idle threads are stored in a third queue, &#039;idle&#039; and is run only when &#039;current&#039; and &#039;next&#039; are empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
It has been implemented as the default scheduler since v7.1 onwards. ULE works really well on both single or uni-processor environments as well as multi-core environments. It prevents unnecessary CPU migration, while making good use of CPU resources. However, 2 key practical problems arose due to the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Implementing queues meant more memory usage and cache hits&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Having multiple threads on different queues caused problems for the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
This problem was resolved by implementing a circular queue, inspired by the calendar queue data structure. Since there is only one queue, there is little extra cpu overhead. There is more flexibility in deciding how much runtime each thread gets relative to those with a lower priority.&lt;br /&gt;
With these design approaches in mind, ULE is also slightly faster on uni-processor systems than the 4.4BSD.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(edited &amp;amp; modified by --[[User:AbsMechanik|AbsMechanik]] 08:50, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Unlike its competitor, FreeBSD, the Linux scheduler has undergone several different cycles of evolution [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html], always aiming towards having a fair and fast scheduler. Various methods and concepts have been tried over different versions such as round robin, iterations, and queues. A quick read through of the history of Linux implies that firstly, equal and balanced use of the system was the goal of the scheduler (to ensure fairness). Eventually, the speed issue was also tackled to ensure that the run times were not affected.&lt;br /&gt;
Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. With the advent of more complicated hardware (multi-core CPU&#039;s), Linux also faced new challenges, as experienced by BSD. Various techniques were employed to accommodate for these changes ranging from the inefficient O(n) scheduler to the CFS (current version) [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html].&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 a scheduler operated with a &amp;lt;b&amp;gt;round robin policy&amp;lt;/b&amp;gt; using a circular queue, allowing the scheduler to be efficient in adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed and used the idea of scheduling classes, thus allowing it to schedule real-time tasks, non real-time tasks, and non-preemptible tasks. It was the first scheduler that supported SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. The scheduler started to be more complex than its predecessors, but it also provided more features. The running time was O(n) because it iterated over each task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice. If a task did not use up its entire time slice, the remaining time was added to the next time slice to allow the task to execute longer in its next epoch. The scheduler simply iterated over all tasks, which made it inefficient, low in scalability, and did not have a useful support for real-time systems. On top of that, it did not have features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of the Linux 2.6.23 introduction, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS uses the idea of maintaining fairness in providing processor time to tasks, which means each tasks gets a fair amount of time to run on the processor. When the time task is out of balance, it means the tasks has to be given more time because the scheduler has to keep fairness. To determine the balance, the CFS maintains the amount of time given to a task, which is called a virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how the CFS executes has changed, too. The scheduler now runs a time-ordered red-black tree. It is self-balancing and runs in O(log n) where n is the amount of nodes in the tree, allowing the scheduler to add and erase tasks efficiently. Tasks with the most need of processor are stored in the left side of the tree. Therefore, tasks with a lower need of cpu are stored in the right side of the tree. To keep fairness, the scheduler takes the left-most node from the tree. The scheduler then accounts execution time at the CPU and adds it to the virtual runtime. If runnable, the task then is inserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side of the tree are migrated to the left side to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice( ). Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
(This work was done by--[[User:AbsMechanik|AbsMechanik]] 09:33, 15 October 2010 (UTC))&lt;br /&gt;
A few key differences between the Linux &amp;amp; FreeBSD schedulers are outlined as below:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE runs in O(1) time whereas CFS runs in O(log(n)) time&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE is based on run-queues whereas CFS utilizes a red-black tree&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Context switching is done much faster by ULE than Linux [http://jeffr-tech.livejournal.com/19139.html]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Hardware innovation has primarily been the driving force behind the evolution of the schedulers and with the ever increasing complexity of parallel &amp;amp; distributed systems and managing multi-threaded it is clear that even the current schedulers will undergo some radical changes to meet those new challenges. It remains to be seen what each group comes up with to address these growing challenges&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4679</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4679"/>
		<updated>2010-10-15T09:39:45Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm and most schedulers utilize a combination of the different algorithms, such as the Multi-Level Feedback Queue, which in one way or another was utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing for certain is that as computer hardware increases in complexity, such as multiple core CPUs (parallelization), and with the advent of more powerful embedded/mobile devices, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs, naturally, increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (derived from &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity highlighted the problem of having an O(n) scheduler for managing processes, as more items were added to the scheduling algorithm, the performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPU&#039;s) a better scheduler was required. This was the driving force behind the creation of ULE for the FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
(modified and edited by --[[User:AbsMechanik|AbsMechanik]] 08:51, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first and the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD placed them in data structures called runqueues. The scheduler would evaluate the runqueues from highest to lowest priority and execute the first thread of the first non-empty runqueue it found. Each thread in that runqueue was assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
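&lt;br /&gt;
The runqueue scan can be sketched as follows (a simplified model, not FreeBSD source; the queue count and the sample threads are invented for illustration):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NQUEUES 8 /* illustrative; the real scheduler has many more levels */&lt;br /&gt;
&lt;br /&gt;
struct thread { int id; struct thread *next; };&lt;br /&gt;
&lt;br /&gt;
/* One runqueue per priority level; index 0 is the highest priority. */&lt;br /&gt;
static struct thread *runq[NQUEUES];&lt;br /&gt;
&lt;br /&gt;
/* Scan from highest to lowest priority and dequeue the first thread&lt;br /&gt;
 * of the first non-empty runqueue, as described above. */&lt;br /&gt;
struct thread *choose_next(void) {&lt;br /&gt;
    for (int pri = 0; pri &amp;lt; NQUEUES; pri++) {&lt;br /&gt;
        if (runq[pri] != NULL) {&lt;br /&gt;
            struct thread *t = runq[pri];&lt;br /&gt;
            runq[pri] = t-&amp;gt;next; /* dequeue */&lt;br /&gt;
            return t;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    return NULL; /* nothing runnable */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    struct thread a = { 1, NULL }, b = { 2, NULL };&lt;br /&gt;
    runq[3] = &amp;amp;a; /* higher priority */&lt;br /&gt;
    runq[5] = &amp;amp;b; /* lower priority */&lt;br /&gt;
    struct thread *t;&lt;br /&gt;
    while ((t = choose_next()) != NULL)&lt;br /&gt;
        printf(&amp;quot;thread %d gets a 0.1 s slice\n&amp;quot;, t-&amp;gt;id);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;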
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Simultaneous Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads placed on increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate on more modern computer systems, it became clear that a new scheduler would be required; thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented (by Jeff Roberson) as an &amp;quot;experimental&amp;quot; scheduler in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind: it properly supported Symmetric Multi-Processing (SMP, including HTT) and Simultaneous Multithreading (SMT) platforms, and it could handle heavy workloads. Primarily an event-driven scheduler, ULE used a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) to ensure fairness. This mechanism is briefly outlined as follows [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1], with a short code sketch after the list:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Runnable threads are assigned to two queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread sits on exactly one of the two queues at a time&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Execution begins with the threads in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are swapped and the threads are executed in the same manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Idle threads are kept in a third queue, &#039;idle&#039;, which is run only when &#039;current&#039; and &#039;next&#039; are both empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
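As a rough illustration of the current/next swap (a user-space sketch; queue sizes and thread IDs are made up, and per-slice re-queueing onto &#039;next&#039; is only hinted at in a comment):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define MAXT 8&lt;br /&gt;
&lt;br /&gt;
struct queue { int ids[MAXT]; int n; };&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    struct queue q0 = { {1, 2}, 2 }, q1 = { {3}, 1 };&lt;br /&gt;
    struct queue *current = &amp;amp;q0, *next = &amp;amp;q1;&lt;br /&gt;
&lt;br /&gt;
    for (int round = 0; round &amp;lt; 2; round++) {&lt;br /&gt;
        while (current-&amp;gt;n &amp;gt; 0) {&lt;br /&gt;
            int id = current-&amp;gt;ids[--current-&amp;gt;n];&lt;br /&gt;
            printf(&amp;quot;run thread %d from current\n&amp;quot;, id);&lt;br /&gt;
            /* a thread whose slice expires would be re-queued on next */&lt;br /&gt;
        }&lt;br /&gt;
        struct queue *tmp = current; /* current is empty: swap the queues */&lt;br /&gt;
        current = next;&lt;br /&gt;
        next = tmp;&lt;br /&gt;
    }&lt;br /&gt;
    /* the idle queue would only be consulted when both are empty */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;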
ULE has been the default scheduler since FreeBSD 7.1. It performs well in both uniprocessor and multi-core environments: it prevents unnecessary CPU migration while making good use of CPU resources. However, two key practical problems arose from the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Maintaining two queues meant more memory usage and more cache misses&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Having runnable threads split across different queues complicated the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
These problems were resolved by moving to a single circular queue, inspired by the calendar queue data structure. With only one queue there is little extra CPU overhead, and there is more flexibility in deciding how much runtime each thread gets relative to threads of lower priority.&lt;br /&gt;
With these design changes, ULE is also slightly faster on uniprocessor systems than the 4.4BSD scheduler.&lt;br /&gt;
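&lt;br /&gt;
The calendar-queue idea can be sketched very simply (this is an assumption-laden toy, not ULE&#039;s actual implementation: the slot count and the insertion rule are invented):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NSLOTS 8 /* illustrative slot count */&lt;br /&gt;
&lt;br /&gt;
/* One circular array of slots with a moving head: a thread&#039;s priority&lt;br /&gt;
 * decides how far ahead of the head it is inserted, so higher-priority&lt;br /&gt;
 * threads are reached sooner as the head advances around the ring. */&lt;br /&gt;
static int slot_for(int head, int priority) {&lt;br /&gt;
    return (head + priority) % NSLOTS;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    int head = 5;&lt;br /&gt;
    printf(&amp;quot;high priority (1) goes to slot %d\n&amp;quot;, slot_for(head, 1));&lt;br /&gt;
    printf(&amp;quot;low priority (6) goes to slot %d\n&amp;quot;, slot_for(head, 6));&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;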
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(edited &amp;amp; modified by --[[User:AbsMechanik|AbsMechanik]] 08:50, 15 October 2010 (UTC))&lt;br /&gt;
Unlike its competitor FreeBSD, the Linux scheduler has undergone several distinct cycles of evolution [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html], always aiming to be both fair and fast. Various methods and concepts have been tried across versions, such as round-robin policies, task iteration, and run queues. A quick read through Linux&#039;s history suggests that equal and balanced use of the system (fairness) was the scheduler&#039;s first goal; the speed issue was tackled later, to ensure that run times were not hurt by scheduling overhead.&lt;br /&gt;
Early schedulers did their best to give processes equal time and resources, but spent a noticeable amount of extra time (in computer terms) doing so. With the advent of more complicated hardware (multi-core CPUs), Linux faced the same new challenges as BSD. Various techniques were employed to accommodate these changes, ranging from the inefficient O(n) scheduler to the CFS (the current version) [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html].&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a &amp;lt;b&amp;gt;round robin policy&amp;lt;/b&amp;gt; over a circular queue, which made adding and removing processes efficient. Linux 2.2 changed the scheduler to use scheduling classes, allowing it to distinguish real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also provided more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice; if a task did not use its entire slice, the remainder was added to its next slice so it could run longer in the next epoch. Because it simply iterated over all tasks, the scheduler was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
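&lt;br /&gt;
A sketch of the epoch mechanic described above (illustrative constants; the real 2.4 recalculation also factored in a task&#039;s priority, which is omitted here):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NTASK 3&lt;br /&gt;
#define BASE_SLICE 60 /* hypothetical per-epoch time slice, in ticks */&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    int counter[NTASK];&lt;br /&gt;
    int unused[NTASK] = { 0, 30, 10 }; /* ticks left over from last epoch */&lt;br /&gt;
&lt;br /&gt;
    /* new epoch: refill every counter, carrying unused time forward */&lt;br /&gt;
    for (int i = 0; i &amp;lt; NTASK; i++)&lt;br /&gt;
        counter[i] = BASE_SLICE + unused[i];&lt;br /&gt;
&lt;br /&gt;
    /* the O(n) part: every scheduling event scans all tasks */&lt;br /&gt;
    int best = 0;&lt;br /&gt;
    for (int i = 1; i &amp;lt; NTASK; i++)&lt;br /&gt;
        if (counter[i] &amp;gt; counter[best])&lt;br /&gt;
            best = i;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;task %d runs next (counter = %d ticks)\n&amp;quot;, best, counter[best]);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;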
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share is out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS tracks the amount of time already given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need toward the right. To keep things fair, the scheduler picks the left-most node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If the task is still runnable, it is then reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate leftward to maintain fairness.&lt;br /&gt;
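&lt;br /&gt;
The pick-leftmost behaviour reduces to &amp;quot;always run the task with the smallest virtual runtime&amp;quot;. The sketch below makes that concrete, using a plain linear scan where the kernel uses the red-black tree (so the pick here is O(n) instead of O(log n)); all task data is invented:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NTASK 3&lt;br /&gt;
&lt;br /&gt;
struct task { int id; unsigned long vruntime; };&lt;br /&gt;
&lt;br /&gt;
/* Stand-in for taking the leftmost node of the red-black tree. */&lt;br /&gt;
static int pick_min_vruntime(struct task *t, int n) {&lt;br /&gt;
    int best = 0;&lt;br /&gt;
    for (int i = 1; i &amp;lt; n; i++)&lt;br /&gt;
        if (t[i].vruntime &amp;lt; t[best].vruntime)&lt;br /&gt;
            best = i;&lt;br /&gt;
    return best;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    struct task tasks[NTASK] = { {1, 400}, {2, 120}, {3, 250} };&lt;br /&gt;
    for (int step = 0; step &amp;lt; 4; step++) {&lt;br /&gt;
        int b = pick_min_vruntime(tasks, NTASK);&lt;br /&gt;
        printf(&amp;quot;run task %d (vruntime %lu)\n&amp;quot;, tasks[b].id, tasks[b].vruntime);&lt;br /&gt;
        tasks[b].vruntime += 100; /* charge the task for the time it ran */&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;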
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear: programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the nice shell command (or the nice() library call). Nice values range from -20 to +19. &lt;br /&gt;
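&lt;br /&gt;
For example, a process can inspect and lower its own priority with the standard getpriority()/setpriority() calls (a minimal sketch; error handling is reduced to a single perror):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;sys/resource.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    int before = getpriority(PRIO_PROCESS, 0); /* 0 = this process */&lt;br /&gt;
&lt;br /&gt;
    /* Ask to be nicer: +10 means we yield the CPU more readily.&lt;br /&gt;
     * (Negative values, i.e. higher priority, normally need root.) */&lt;br /&gt;
    if (setpriority(PRIO_PROCESS, 0, 10) != 0)&lt;br /&gt;
        perror(&amp;quot;setpriority&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
    int after = getpriority(PRIO_PROCESS, 0);&lt;br /&gt;
    printf(&amp;quot;nice level before %d, after %d (range -20..+19)\n&amp;quot;, before, after);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The same effect is available from the shell, e.g. running a program with nice -n 10.&lt;br /&gt;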
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
(This work was done by--[[User:AbsMechanik|AbsMechanik]] 09:33, 15 October 2010 (UTC))&lt;br /&gt;
A few key differences between the Linux &amp;amp; FreeBSD schedulers are outlined below:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE runs in O(1) time whereas CFS runs in O(log n) time&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE is based on run-queues whereas CFS utilizes a red-black tree&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE performs context switches much faster than the Linux scheduler [http://jeffr-tech.livejournal.com/19139.html]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Hardware innovation has been the primary driving force behind the evolution of these schedulers, and with the ever-increasing complexity of parallel &amp;amp; distributed systems and of managing multi-threaded workloads, it is clear that even the current schedulers will undergo radical changes to meet those new challenges. It remains to be seen what each group comes up with to address these growing challenges.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4677</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4677"/>
		<updated>2010-10-15T09:33:09Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, while complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It combines &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine their priorities, and a process&#039;s observed behaviour feeds back into its priority. For example, I/O-bound tasks that spend most of their time waiting for the user can be kept at a high priority so the system stays responsive (a small sketch of the feedback idea follows this list).&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
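As a sketch of the multilevel feedback idea from the last item (a toy model, not any OS&#039;s actual code; level count, quanta and CPU demands are invented), a task that burns its whole quantum is demoted one level, so CPU-hungry tasks drift downward:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NLEVELS 3&lt;br /&gt;
#define MAXT 8&lt;br /&gt;
&lt;br /&gt;
struct level { int ids[MAXT]; int n; };&lt;br /&gt;
static struct level q[NLEVELS];&lt;br /&gt;
&lt;br /&gt;
static void enqueue(int lvl, int id) { q[lvl].ids[q[lvl].n++] = id; }&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    enqueue(0, 1);&lt;br /&gt;
    enqueue(0, 2);&lt;br /&gt;
    int burst[3] = { 0, 300, 80 }; /* hypothetical CPU demand, in ms */&lt;br /&gt;
    int quantum[NLEVELS] = { 100, 200, 400 };&lt;br /&gt;
&lt;br /&gt;
    for (int lvl = 0; lvl &amp;lt; NLEVELS; lvl++) {&lt;br /&gt;
        for (int i = 0; i &amp;lt; q[lvl].n; i++) {&lt;br /&gt;
            int id = q[lvl].ids[i];&lt;br /&gt;
            int run = burst[id] &amp;lt; quantum[lvl] ? burst[id] : quantum[lvl];&lt;br /&gt;
            burst[id] -= run;&lt;br /&gt;
            printf(&amp;quot;task %d ran %d ms at level %d\n&amp;quot;, id, run, lvl);&lt;br /&gt;
            if (burst[id] &amp;gt; 0 &amp;amp;&amp;amp; lvl + 1 &amp;lt; NLEVELS)&lt;br /&gt;
                enqueue(lvl + 1, id); /* used its full quantum: demote */&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;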
There is no single &amp;quot;best&amp;quot; algorithm, and most schedulers combine several of these approaches; the Multi-Level Feedback Queue, for example, has been used in one form or another in Windows XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware grows in complexity, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have had to evolve to meet these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD, which was itself derived from the traditional UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX-based systems, the BSD scheduler was designed to run on a single-core computer system (with limited I/O) and to handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler switched between processes every 0.1 seconds (100 milliseconds) in a round-robin fashion [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (as predicted by &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity exposed the problem with an O(n) scheduler: as more tasks were added to the scheduling algorithm, performance degraded. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) on multi-core CPUs becoming inevitable, a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
(modified and edited by --[[User:AbsMechanik|AbsMechanik]] 08:51, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler gave processor time to real-time threads first, and idle threads ran only when no real-time thread needed the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD placed them in data structures called runqueues. The scheduler would evaluate the runqueues from highest to lowest priority and execute the first thread of the first non-empty runqueue it found. Each thread in that runqueue was assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Simultaneous Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads placed on increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate on more modern computer systems, it became clear that a new scheduler would be required; thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented (by Jeff Roberson) as an &amp;quot;experimental&amp;quot; scheduler in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind: it properly supported Symmetric Multi-Processing (SMP, including HTT) and Simultaneous Multithreading (SMT) platforms, and it could handle heavy workloads. Primarily an event-driven scheduler, ULE used a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) to ensure fairness. This mechanism is briefly outlined as follows [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Runnable threads are assigned to two queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread sits on exactly one of the two queues at a time&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Execution begins with the threads in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are swapped and the threads are executed in the same manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Idle threads are kept in a third queue, &#039;idle&#039;, which is run only when &#039;current&#039; and &#039;next&#039; are both empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
ULE has been the default scheduler since FreeBSD 7.1. It performs well in both uniprocessor and multi-core environments: it prevents unnecessary CPU migration while making good use of CPU resources. However, two key practical problems arose from the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Maintaining two queues meant more memory usage and more cache misses&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Having runnable threads split across different queues complicated the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
These problems were resolved by moving to a single circular queue, inspired by the calendar queue data structure. With only one queue there is little extra CPU overhead, and there is more flexibility in deciding how much runtime each thread gets relative to threads of lower priority.&lt;br /&gt;
With these design changes, ULE is also slightly faster on uniprocessor systems than the 4.4BSD scheduler.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(edited &amp;amp; modified by --[[User:AbsMechanik|AbsMechanik]] 08:50, 15 October 2010 (UTC))&lt;br /&gt;
Unlike its competitor FreeBSD, the Linux scheduler has undergone several distinct cycles of evolution [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html], always aiming to be both fair and fast. Various methods and concepts have been tried across versions, such as round-robin policies, task iteration, and run queues. A quick read through Linux&#039;s history suggests that equal and balanced use of the system (fairness) was the scheduler&#039;s first goal; the speed issue was tackled later, to ensure that run times were not hurt by scheduling overhead.&lt;br /&gt;
Early schedulers did their best to give processes equal time and resources, but spent a noticeable amount of extra time (in computer terms) doing so. With the advent of more complicated hardware (multi-core CPUs), Linux faced the same new challenges as BSD. Various techniques were employed to accommodate these changes, ranging from the inefficient O(n) scheduler to the CFS (the current version) [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html].&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a &amp;lt;b&amp;gt;round robin policy&amp;lt;/b&amp;gt; over a circular queue, which made adding and removing processes efficient. Linux 2.2 changed the scheduler to use scheduling classes, allowing it to distinguish real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also provided more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice; if a task did not use its entire slice, the remainder was added to its next slice so it could run longer in the next epoch. Because it simply iterated over all tasks, the scheduler was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share is out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS tracks the amount of time already given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need toward the right. To keep things fair, the scheduler picks the left-most node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If the task is still runnable, it is then reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate leftward to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear: programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the nice shell command (or the nice() library call). Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
(This work was done by--[[User:AbsMechanik|AbsMechanik]] 09:33, 15 October 2010 (UTC))&lt;br /&gt;
A few key differences between the Linux &amp;amp; FreeBSD schedulers are outlined below:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE runs in O(1) time whereas CFS runs in O(log n) time&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;ULE is based on run-queues whereas CFS utilizes a red-black tree&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Of the two systems, Linux was the first whose scheduler supported SMP&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
Hardware innovation has been the primary driving force behind the evolution of these schedulers, and with the ever-increasing complexity of parallel &amp;amp; distributed systems and of managing multi-threaded workloads, it is clear that even the current schedulers will undergo radical changes to meet those new challenges.&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4658</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4658"/>
		<updated>2010-10-15T08:51:01Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Older Versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, while complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It combines &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine their priorities, and a process&#039;s observed behaviour feeds back into its priority. For example, I/O-bound tasks that spend most of their time waiting for the user can be kept at a high priority so the system stays responsive.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no single &amp;quot;best&amp;quot; algorithm, and most schedulers combine several of these approaches; the Multi-Level Feedback Queue, for example, has been used in one form or another in Windows XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware grows in complexity, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have had to evolve to meet these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD, which was itself derived from the traditional UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX-based systems, the BSD scheduler was designed to run on a single-core computer system (with limited I/O) and to handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler switched between processes every 0.1 seconds (100 milliseconds) in a round-robin fashion [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (as predicted by &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity exposed the problem with an O(n) scheduler: as more tasks were added to the scheduling algorithm, performance degraded. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) on multi-core CPUs becoming inevitable, a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
(modified and edited by --[[User:AbsMechanik|AbsMechanik]] 08:51, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler gave processor time to real-time threads first, and idle threads ran only when no real-time thread needed the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD placed them in data structures called runqueues. The scheduler would evaluate the runqueues from highest to lowest priority and execute the first thread of the first non-empty runqueue it found. Each thread in that runqueue was assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Simultaneous Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads placed on increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate on more modern computer systems, it became clear that a new scheduler would be required; thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented (by Jeff Roberson) as an &amp;quot;experimental&amp;quot; scheduler in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind: it properly supported Symmetric Multi-Processing (SMP, including HTT) and Simultaneous Multithreading (SMT) platforms, and it could handle heavy workloads. Primarily an event-driven scheduler, ULE used a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) to ensure fairness. This mechanism is briefly outlined as follows [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Runnable threads are assigned to two queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread sits on exactly one of the two queues at a time&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Execution begins with the threads in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are swapped and the threads are executed in the same manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Idle threads are kept in a third queue, &#039;idle&#039;, which is run only when &#039;current&#039; and &#039;next&#039; are both empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
ULE has been the default scheduler since FreeBSD 7.1. It performs well in both uniprocessor and multi-core environments: it prevents unnecessary CPU migration while making good use of CPU resources. However, two key practical problems arose from the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Maintaining two queues meant more memory usage and more cache misses&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Having runnable threads split across different queues complicated the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
These problems were resolved by moving to a single circular queue, inspired by the calendar queue data structure. With only one queue there is little extra CPU overhead, and there is more flexibility in deciding how much runtime each thread gets relative to threads of lower priority.&lt;br /&gt;
With these design changes, ULE is also slightly faster on uniprocessor systems than the 4.4BSD scheduler.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(edited &amp;amp; modified by --[[User:AbsMechanik|AbsMechanik]] 08:50, 15 October 2010 (UTC))&lt;br /&gt;
Unlike its competitor FreeBSD, the Linux scheduler has undergone several distinct cycles of evolution [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html], always aiming to be both fair and fast. Various methods and concepts have been tried across versions, such as round-robin policies, task iteration, and run queues. A quick read through Linux&#039;s history suggests that equal and balanced use of the system (fairness) was the scheduler&#039;s first goal; the speed issue was tackled later, to ensure that run times were not hurt by scheduling overhead.&lt;br /&gt;
Early schedulers did their best to give processes equal time and resources, but spent a noticeable amount of extra time (in computer terms) doing so. With the advent of more complicated hardware (multi-core CPUs), Linux faced the same new challenges as BSD. Various techniques were employed to accommodate these changes, ranging from the inefficient O(n) scheduler to the CFS (the current version) [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html].&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round robin policy over a circular queue, which made adding and removing processes efficient. Linux 2.2 changed the scheduler to use scheduling classes, allowing it to distinguish real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also provided more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, within which each task could execute up to its time slice; if a task did not use its entire slice, the remainder was added to its next slice so it could run longer in the next epoch. Because it simply iterated over all tasks, the scheduler was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share is out of balance, that task must be given more time, because the scheduler has to preserve fairness. To measure this balance, CFS tracks the amount of time already given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need toward the right. To keep things fair, the scheduler picks the left-most node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If the task is still runnable, it is then reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate leftward to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear: programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust a program&#039;s niceness with the nice shell command (or the nice() library call). Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4655</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4655"/>
		<updated>2010-10-15T08:50:09Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, while complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It combines &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine their priorities, and a process&#039;s observed behaviour feeds back into its priority. For example, I/O-bound tasks that spend most of their time waiting for the user can be kept at a high priority so the system stays responsive.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no single &amp;quot;best&amp;quot; algorithm, and most schedulers combine several of these approaches; the Multi-Level Feedback Queue, for example, has been used in one form or another in Windows XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware grows in complexity, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have had to evolve to meet these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD, which was itself a version of the traditional UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX-based systems, the BSD scheduler was designed to work on a single-core computer system (with limited I/O) and to handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 seconds (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (as predicted by &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity highlighted the problem of having an O(n) scheduler for managing processes: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) on multi-core CPUs becoming inevitable, a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
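&lt;br /&gt;
To see why an O(n) design eventually hurts, consider a minimal sketch of the pick-next loop such a scheduler performs on every context switch (the structures here are hypothetical; the point is that the cost of each decision grows linearly with the number of processes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* O(n) selection: every context switch scans the whole table. */&lt;br /&gt;
struct proc { int runnable; int priority; };&lt;br /&gt;
&lt;br /&gt;
struct proc *pick_next(struct proc *table, int n) {&lt;br /&gt;
    struct proc *best = 0;&lt;br /&gt;
    for (int i = 0; i &amp;lt; n; i++) {   /* touches all n entries, every time */&lt;br /&gt;
        if (!table[i].runnable)&lt;br /&gt;
            continue;&lt;br /&gt;
        if (!best || table[i].priority &amp;gt; best-&amp;gt;priority)&lt;br /&gt;
            best = &amp;amp;table[i];&lt;br /&gt;
    }&lt;br /&gt;
    return best;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;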
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the pure round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler would give processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD used data structures called runqueues, into which the threads were placed. The scheduler would evaluate the runqueues by priority, from highest to lowest, and execute the first thread of the first non-empty runqueue it found. Each thread in that runqueue would then be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
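&lt;br /&gt;
A minimal sketch of that runqueue arrangement, under assumed simplified types (the real kernel also keeps a bitmap of non-empty queues so the scan stays cheap):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define NQUEUES 64                  /* one queue per priority band (illustrative) */&lt;br /&gt;
&lt;br /&gt;
struct thread   { struct thread *next; };&lt;br /&gt;
struct runqueue { struct thread *head; };&lt;br /&gt;
&lt;br /&gt;
struct runqueue queues[NQUEUES];    /* index 0 = highest priority */&lt;br /&gt;
&lt;br /&gt;
/* Scan from highest to lowest priority and run the head of the
   first non-empty queue; its threads share equal time slices. */&lt;br /&gt;
struct thread *pick_next_bsd(void) {&lt;br /&gt;
    for (int pri = 0; pri &amp;lt; NQUEUES; pri++)&lt;br /&gt;
        if (queues[pri].head)&lt;br /&gt;
            return queues[pri].head;&lt;br /&gt;
    return 0;                       /* nothing runnable */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;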
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Symmetric Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate with more modern computer systems, it became clear that a new scheduler would be required; thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented as an &amp;quot;experimental&amp;quot; scheduler (by Jeff Roberson) in FreeBSD 5.1, before being adopted in the FreeBSD 5.3 development cycle. It was designed with modern hardware and requirements in mind: it properly supported Symmetric Multi-Processing (SMP) (including HTT) and Symmetric Multi-Threading (SMT) platforms, and could handle heavy workloads. Primarily an event-driven scheduler, ULE used a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) to ensure fairness. This mechanism is briefly outlined as follows [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1] (a small sketch of the queue switch appears after the list):&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Runnable threads are organized into two queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread is assigned to either &#039;current&#039; or &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Execution first proceeds through the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are switched and the threads are executed in the same manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;All idle threads are stored in a third queue, &#039;idle&#039;, which is run only when &#039;current&#039; and &#039;next&#039; are empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
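A minimal sketch of that queue switch, with deliberately simplified types (real ULE keeps these queues per CPU and indexes them by priority):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
struct thread { struct thread *next; };&lt;br /&gt;
struct queue  { struct thread *head; };&lt;br /&gt;
&lt;br /&gt;
struct queue q_a, q_b, q_idle;&lt;br /&gt;
struct queue *current = &amp;amp;q_a;     /* runs first */&lt;br /&gt;
struct queue *nextq   = &amp;amp;q_b;     /* threads that used their slice wait here */&lt;br /&gt;
&lt;br /&gt;
struct thread *pick_next_ule(void) {&lt;br /&gt;
    if (!current-&amp;gt;head) {           /* &#039;current&#039; drained: swap the queues */&lt;br /&gt;
        struct queue *tmp = current;&lt;br /&gt;
        current = nextq;&lt;br /&gt;
        nextq = tmp;&lt;br /&gt;
    }&lt;br /&gt;
    if (current-&amp;gt;head)&lt;br /&gt;
        return current-&amp;gt;head;&lt;br /&gt;
    return q_idle.head;             /* only when both queues are empty */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;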
ULE has been the default scheduler since FreeBSD 7.1. It works well in both uni-processor and multi-core environments: it prevents unnecessary CPU migration while making good use of CPU resources. However, two key practical problems arose from the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Maintaining two queues meant more memory usage and more cache misses&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Keeping threads spread across two queues complicated thread management for the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
These problems were resolved by switching to a single circular queue, inspired by the calendar queue data structure. With only one queue there is little extra CPU overhead, and there is more flexibility in deciding how much runtime each thread gets relative to lower-priority threads.&lt;br /&gt;
With these design changes, ULE is also slightly faster than the 4.4BSD scheduler, even on uni-processor systems.&lt;br /&gt;
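&lt;br /&gt;
A toy version of such a circular runqueue, loosely in the spirit of a calendar queue (the slot count, field names and tick granularity are all made up):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define SLOTS 256&lt;br /&gt;
&lt;br /&gt;
struct thread { struct thread *next; };&lt;br /&gt;
&lt;br /&gt;
struct thread *ring[SLOTS];   /* one linked list of threads per slot */&lt;br /&gt;
unsigned head = 0;            /* advances as time passes */&lt;br /&gt;
&lt;br /&gt;
/* Inserting a thread k slots ahead of the head delays it by
   roughly k ticks, so priority maps naturally onto distance. */&lt;br /&gt;
void ring_insert(struct thread *t, unsigned delay) {&lt;br /&gt;
    unsigned slot = (head + delay) % SLOTS;&lt;br /&gt;
    t-&amp;gt;next = ring[slot];&lt;br /&gt;
    ring[slot] = t;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
struct thread *ring_pick_next(void) {&lt;br /&gt;
    for (unsigned i = 0; i &amp;lt; SLOTS; i++) {&lt;br /&gt;
        unsigned slot = (head + i) % SLOTS;&lt;br /&gt;
        if (ring[slot]) {&lt;br /&gt;
            struct thread *t = ring[slot];&lt;br /&gt;
            ring[slot] = t-&amp;gt;next;&lt;br /&gt;
            head = slot;&lt;br /&gt;
            return t;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;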
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
(edited &amp;amp; modified by --[[User:AbsMechanik|AbsMechanik]] 08:50, 15 October 2010 (UTC))&lt;br /&gt;
Unlike its competitor FreeBSD, the Linux scheduler has undergone several distinct cycles of evolution [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html], always aiming for a scheduler that is both fair and fast. Various methods and concepts have been tried across versions, including round-robin policies, iteration over task lists, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system (fairness) was the scheduler&#039;s first goal; speed was tackled later, to ensure that run times did not suffer.&lt;br /&gt;
Early schedulers did their best to give processes equal time and resources, but spent a bit of extra time (in computer terms) to accomplish this. With the advent of more complicated hardware (multi-core CPUs), Linux faced the same new challenges BSD did. Various techniques were employed to accommodate these changes, ranging from the inefficient O(n) scheduler to the current CFS [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html].&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within each epoch every task could execute up to its time slice. If a task did not use up its entire time slice, half of the remaining time was added to its next slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it did not provide useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
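&lt;br /&gt;
A sketch of that per-epoch recalculation (toy fields; the 2.4 kernel recomputed a tick counter roughly along the lines of counter = counter/2 + priority):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
struct task { int counter; int priority; };&lt;br /&gt;
&lt;br /&gt;
/* At the end of an epoch every task gets a fresh slice; a task
   that slept through part of its old slice carries half of the
   leftover forward, so interactive tasks run longer next epoch. */&lt;br /&gt;
void new_epoch(struct task *tasks, int n) {&lt;br /&gt;
    for (int i = 0; i &amp;lt; n; i++)        /* the O(n) walk over all tasks */&lt;br /&gt;
        tasks[i].counter = tasks[i].counter / 2 + tasks[i].priority;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;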
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around the idea of maintaining fairness in the processor time given to tasks: each task should get a fair share of time to run on the processor. When a task&#039;s share falls out of balance, i.e. it has received less time than its fair share, the scheduler must give it more time in order to preserve fairness. To track this balance, CFS maintains the amount of time that has been given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing and its operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor (the lowest virtual runtime) sit toward the left side of the tree, while tasks with a lower need of the CPU sit toward the right. To keep fairness, the scheduler always picks the left-most node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left, maintaining fairness.&lt;br /&gt;
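&lt;br /&gt;
A heavily simplified sketch of the virtual-runtime bookkeeping, using a linear scan in place of the kernel&#039;s red-black tree (all types and numbers here are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
struct task { const char *name; unsigned long vruntime; };&lt;br /&gt;
&lt;br /&gt;
/* Pick the task with the smallest virtual runtime: the stand-in
   for taking the left-most node of the red-black tree. */&lt;br /&gt;
struct task *pick_min_vruntime(struct task *t, int n) {&lt;br /&gt;
    struct task *min = &amp;amp;t[0];&lt;br /&gt;
    for (int i = 1; i &amp;lt; n; i++)&lt;br /&gt;
        if (t[i].vruntime &amp;lt; min-&amp;gt;vruntime)&lt;br /&gt;
            min = &amp;amp;t[i];&lt;br /&gt;
    return min;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* Charge the time the task just used; tasks that run a lot
   accumulate vruntime and drift toward the right of the tree. */&lt;br /&gt;
void account(struct task *t, unsigned long ran) {&lt;br /&gt;
    t-&amp;gt;vruntime += ran;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;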
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level, the nicer it will be about sharing system resources. A program with a lower nice level is more greedy, and a program with a higher nice level more readily gives up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust the niceness of a program with the nice shell command (or, from C, the nice() system call). Nice values range from -20 to +19. &lt;br /&gt;
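&lt;br /&gt;
For example, a process can make itself nicer through the standard nice() call (a minimal sketch; the command-line equivalent would be something like nice -n 10 ./batch_job):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    /* Raise our nice level by 10, lowering our priority; only
       root may pass a negative increment to become greedier. */&lt;br /&gt;
    int level = nice(10);&lt;br /&gt;
    printf(&amp;quot;now running at nice level %d\n&amp;quot;, level);&lt;br /&gt;
    /* ... long-running, low-urgency work goes here ... */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;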
&lt;br /&gt;
In previous versions of Linux, time slices depended on the clock speed of the processor. While this dependency was a workable way of dividing up time slices, it made it hard for the Linux developers to fine-tune the scheduler. In recent releases, each nice level is instead assigned a fixed-size time slice. This keeps nicer programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time for the processor, it is given a temporary priority increase to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
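&lt;br /&gt;
One way to picture this heuristic, with entirely illustrative thresholds and field names (not the kernel&#039;s actual bookkeeping):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
struct task { int base_prio; int bonus; long wait_ticks; long cpu_ticks; };&lt;br /&gt;
&lt;br /&gt;
/* Temporary, bounded adjustment: long waiters get a boost and
   CPU hogs a penalty; the base priority itself never changes. */&lt;br /&gt;
void adjust_bonus(struct task *t) {&lt;br /&gt;
    if (t-&amp;gt;wait_ticks &amp;gt; 1000)&lt;br /&gt;
        t-&amp;gt;bonus = 5;               /* starved: bump it up */&lt;br /&gt;
    else if (t-&amp;gt;cpu_ticks &amp;gt; 1000)&lt;br /&gt;
        t-&amp;gt;bonus = -5;              /* hog: demote a little */&lt;br /&gt;
    else&lt;br /&gt;
        t-&amp;gt;bonus = 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int effective_prio(const struct task *t) {&lt;br /&gt;
    return t-&amp;gt;base_prio + t-&amp;gt;bonus;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;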
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke, and Hideyuki Tokuda. A Time-Driven Scheduling Model for Real-Time Operating Systems. Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William. Operating Systems: Internals and Design Principles. Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K., and G. V. Neville-Neil. Thread Scheduling in FreeBSD 5.2. ACM Queue 2, 7 (Oct. 2004), 58-64. DOI: http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4638</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=4638"/>
		<updated>2010-10-15T08:27:40Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Current Version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]] 22:27, 13 October 2010 (UTC))&lt;br /&gt;
(Modified by [[User:Mike Preston|Mike Preston]] and [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm and most schedulers utilize a combination of the different algorithms, such as the Multi-Level Feedback Queue, which in one way or another was utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing for certain is that as computer hardware increases in complexity, such as multiple core CPUs (parallelization), and with the advent of more powerful embedded/mobile devices, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs, naturally, increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (derived from &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity highlighted the problem of having an O(n) scheduler for managing processes, as more items were added to the scheduling algorithm, the performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPU&#039;s) a better scheduler was required. This was the driving force behind the creation of ULE for the FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first and the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues, into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal value time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Symmetric Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on ever increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate with more modern computer systems, it became clear that a new scheduler would be required, and thus, the ULE scheduler was created.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
(modified by--[[User:AbsMechanik|AbsMechanik]] 08:27, 15 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented as part of an &amp;quot;experimental&amp;quot; process (by Jeff Roberson) in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind and had proper support for Symmetric Multi-Processing (SMP) (and HTT), Symmetric Multi-Threading (SMT) platforms and handle heavy workloads. Primarily being an event-driven scheduler, ULE utilized a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) for ensuring fairness. This mechanism is briefly outlined as follows[http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process threads are assigned in 2 queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread is either assigned to &#039;current&#039; or &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process execution first begins in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are switched and the threads are executed in a similar manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;All idle threads are stored in a third queue, &#039;idle&#039; and is run only when &#039;current&#039; and &#039;next&#039; are empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
It has been implemented as the default scheduler since v7.1 onwards. ULE works really well on both single or uni-processor environments as well as multi-core environments. It prevents unnecessary CPU migration, while making good use of CPU resources. However, 2 key practical problems arose due to the double-queue mechanism [http://jeffr-tech.livejournal.com/3729.html]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Implementing queues meant more memory usage and cache hits&amp;lt;/li&amp;gt;&lt;br /&gt;
  &amp;lt;li&amp;gt;Having multiple threads on different queues caused problems for the rest of the system&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
This problem was resolved by implementing a circular queue, inspired by the calendar queue data structure. Since there is only one queue, there is little extra cpu overhead. There is more flexibility in deciding how much runtime each thread gets relative to those with a lower priority.&lt;br /&gt;
With these design approaches in mind, ULE is also slightly faster on uni-processor systems than the 4.4BSD.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a large history of improvement, always aiming towards having a fair and fast scheduler. Various methods and concepts have been tried over different versions to get this fair and fast scheduler, including round robin, iterations, and queues. A quick read through of the history of Linux implies that firstly, equal and balanced use of the system was the goal of the scheduler, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features to allow personal tweaking by the system user, or even the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since its original release as the UNIX operating system in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers, which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 a scheduler operated with a round robin policy using a circular queue, allowing the scheduler to be efficient in adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, thus allowing it to schedule real-time tasks, non real-time tasks, and non-preemptible tasks. It was the first scheduler that supported SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. The scheduler started to be more complex than its predecessors, but it also has more features. The running time was O(n) because it iterated over each task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice. If a task did not use up its entire time slice, the remaining time was added to the next time slice to allow the task to execute longer in its next epoch. The scheduler simply iterated over all tasks, which made it inefficient, low in scalability, and did not have a useful support for real-time systems. On top of that, it did not have features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of the Linux 2.6.23 introduction, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS uses the idea of maintaining fairness in providing processor time to tasks, which means each tasks gets a fair amount of time to run on the processor. When the time task is out of balance, it means the tasks has to be given more time because the scheduler has to keep fairness. To determine the balance, the CFS maintains the amount of time given to a task, which is called a virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how the CFS executes has changed, too. The scheduler now runs a time-ordered red-black tree. It is self-balancing and runs in O(log n) where n is the amount of nodes in the tree, allowing the scheduler to add and erase tasks efficiently. Tasks with the most need of processor are stored in the left side of the tree. Therefore, tasks with a lower need of cpu are stored in the right side of the tree. To keep fairness, the scheduler takes the left-most node from the tree. The scheduler then accounts execution time at the CPU and adds it to the virtual runtime. If runnable, the task then is inserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side of the tree are migrated to the left side to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice( ). Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3853</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3853"/>
		<updated>2010-10-14T17:07:42Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Current Version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm and most schedulers utilize a combination of the different algorithms, such as the Multi-Level Feedback Queue, which in one way or another was utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing for certain is that as computer hardware increases in complexity, such as multiple core CPUs (parallelization), and with the advent of more powerful embedded/mobile devices, schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs, naturally, increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (derived from &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. This additional complexity highlighted the problem of having an O(n) scheduler for managing processes, as more items were added to the scheduling algorithm, the performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPU&#039;s) a better scheduler was required. This was the driving force behind the creation of ULE for the FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first and the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues, into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal value time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Symmetric Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on ever increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate with more modern computer systems, it became clear that a new scheduler would be required, and thus, became the driving force behind the implementation of ULE.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
ULE was first implemented as part of an &amp;quot;experimental&amp;quot; process (by Jeff Roberson) in FreeBSD v5.1, before being added to the FreeBSD v5.3 development cycle. It was designed with modern hardware and requirements in mind and had proper support for Symmetric Multi-Processing (SMP) (and HTT), Symmetric Multi-Threading (SMT) platforms and handle heavy workloads. Primarily being an event-driven scheduler, ULE utilized a double-queue mechanism (borrowed from Linux&#039;s &amp;lt;b&amp;gt;O(1) scheduler&amp;lt;/b&amp;gt;) for ensuring fairness. This mechanism is briefly outlined as follows[http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process threads are assigned in 2 queues, &#039;current&#039; and &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Each thread is either assigned to &#039;current&#039; or &#039;next&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Process execution first begins in the &#039;current&#039; queue (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;Once &#039;current&#039; is empty, the &#039;next&#039; and &#039;current&#039; queues are switched and the threads are executed in a similar manner (priority based)&amp;lt;/li&amp;gt;&lt;br /&gt;
 &amp;lt;li&amp;gt;All idle threads are stored in a third queue, &#039;idle&#039; and is run only when &#039;current&#039; and &#039;next&#039; are empty&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
It has been implemented as the default scheduler since v7.1 onwards. ULE works really well on both single or uni-processor environments as well as multi-core environments. It prevents unnecessary CPU migration, while making good use of CPU resources. However, 2 key practical problems arose due to the double-queue mechanism.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia which the prof has kind of implied is not a good idea, I also added another one that was to a blog post as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional alidations to the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a large history of improvement, always aiming towards having a fair and fast scheduler. Various methods and concepts have been tried over different versions to get this fair and fast scheduler, including round robin, iterations, and queues. A quick read through of the history of Linux implies that firstly, equal and balanced use of the system was the goal of the scheduler, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features to allow personal tweaking by the system user, or even the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since its original release as the UNIX operating system in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers, which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 a scheduler operated with a round robin policy using a circular queue, allowing the scheduler to be efficient in adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, thus allowing it to schedule real-time tasks, non real-time tasks, and non-preemptible tasks. It was the first scheduler that supported SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. The scheduler started to be more complex than its predecessors, but it also has more features. The running time was O(n) because it iterated over each task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice. If a task did not use up its entire time slice, the remaining time was added to the next time slice to allow the task to execute longer in its next epoch. The scheduler simply iterated over all tasks, which made it inefficient, low in scalability, and did not have a useful support for real-time systems. On top of that, it did not have features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of the Linux 2.6.23 introduction, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS uses the idea of maintaining fairness in providing processor time to tasks, which means each tasks gets a fair amount of time to run on the processor. When the time task is out of balance, it means the tasks has to be given more time because the scheduler has to keep fairness. To determine the balance, the CFS maintains the amount of time given to a task, which is called a virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how the CFS executes has changed, too. The scheduler now runs a time-ordered red-black tree. It is self-balancing and runs in O(log n) where n is the amount of nodes in the tree, allowing the scheduler to add and erase tasks efficiently. Tasks with the most need of processor are stored in the left side of the tree. Therefore, tasks with a lower need of cpu are stored in the right side of the tree. To keep fairness, the scheduler takes the left-most node from the tree. The scheduler then accounts execution time at the CPU and adds it to the virtual runtime. If runnable, the task then is inserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents on the right side of the tree are migrated to the left side to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be handled manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice( ). Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature, which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;br /&gt;
&lt;br /&gt;
3. McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3725</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3725"/>
		<updated>2010-10-14T13:42:37Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Older Versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
Many different algorithms are utilized in schedulers; a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, interactive I/O-bound tasks are typically kept at a high priority: they spend most of their time waiting for the user, and need the CPU only briefly when input arrives.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers utilize a combination of the different algorithms; the Multi-Level Feedback Queue, for example, was in one way or another utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing that is certain is that as computer hardware has increased in complexity, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (in line with &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. The additional complexity highlighted the problem of having an O(n) scheduler for managing processes: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPUs), a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
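&lt;br /&gt;
The selection step just described can be sketched in a few lines of C; the type names and the number of priority levels below are invented for illustration and are not drawn from the FreeBSD sources.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NQUEUES 32 /* invented number of priority levels */&lt;br /&gt;
&lt;br /&gt;
/* Hypothetical thread and runqueue types. */&lt;br /&gt;
struct thread { struct thread *next; int tid; };&lt;br /&gt;
struct runqueue { struct thread *head; };&lt;br /&gt;
&lt;br /&gt;
/* Scan the runqueues from highest priority (index 0) to lowest and&lt;br /&gt;
   return the first thread of the first non-empty queue, as the&lt;br /&gt;
   BSD-style scheduler described above does. */&lt;br /&gt;
struct thread *pick_next(struct runqueue rq[NQUEUES]) {&lt;br /&gt;
    for (int pri = 0; pri &amp;lt; NQUEUES; pri++)&lt;br /&gt;
        if (rq[pri].head != NULL)&lt;br /&gt;
            return rq[pri].head;&lt;br /&gt;
    return NULL; /* nothing runnable */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;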
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing (SMP) or Symmetric Multithreading (SMT) on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. &lt;br /&gt;
To allow FreeBSD to operate on more modern computer systems it became clear that a new scheduler would be required; this need became the driving force behind the implementation of ULE.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia, which the prof has kind of implied is not a good idea; I also added another one where the existing reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming towards a fair and fast scheduler. Various methods and concepts have been tried over different versions to reach this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features allowing personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since the original release of the UNIX operating system, on which it is modelled, in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing the scheduler to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
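&lt;br /&gt;
As an illustration of the round-robin policy used in Linux 1.2, here is a minimal, self-contained C sketch that cycles through a fixed set of tasks, giving each one quantum per pass; the task list, quantum, and field names are invented and not taken from the Linux sources.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NTASKS 3&lt;br /&gt;
#define QUANTUM 100 /* invented time slice, in ms */&lt;br /&gt;
&lt;br /&gt;
/* Hypothetical task record: name and remaining work in ms. */&lt;br /&gt;
struct task { const char *name; int remaining; };&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    struct task tasks[NTASKS] = { { &amp;quot;A&amp;quot;, 250 }, { &amp;quot;B&amp;quot;, 120 }, { &amp;quot;C&amp;quot;, 300 } };&lt;br /&gt;
    int unfinished = NTASKS;&lt;br /&gt;
    /* Cycle around the queue, giving each unfinished task one quantum. */&lt;br /&gt;
    while (unfinished &amp;gt; 0) {&lt;br /&gt;
        for (int i = 0; i &amp;lt; NTASKS; i++) {&lt;br /&gt;
            if (tasks[i].remaining &amp;lt;= 0)&lt;br /&gt;
                continue;&lt;br /&gt;
            int slice = tasks[i].remaining &amp;lt; QUANTUM ? tasks[i].remaining : QUANTUM;&lt;br /&gt;
            tasks[i].remaining -= slice;&lt;br /&gt;
            printf(&amp;quot;ran %s for %d ms (%d ms left)\n&amp;quot;, tasks[i].name, slice, tasks[i].remaining);&lt;br /&gt;
            if (tasks[i].remaining == 0)&lt;br /&gt;
                unfinished--;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;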
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, part of the remaining time was carried into its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
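&lt;br /&gt;
The epoch mechanism described above can be sketched as follows; the base slice value and structure are invented, though the carry-over expression mirrors the shape of the 2.4-era recalculation, in which a task kept half of its unused quantum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#define BASE_SLICE 60 /* invented base quantum, in timer ticks */&lt;br /&gt;
&lt;br /&gt;
/* Hypothetical per-task record: ticks left in the current epoch. */&lt;br /&gt;
struct task { int counter; };&lt;br /&gt;
&lt;br /&gt;
/* Start a new epoch once runnable tasks have exhausted their slices:&lt;br /&gt;
   each task gets a fresh base slice plus half of whatever it left&lt;br /&gt;
   unused, so tasks that slept (typically interactive ones) start the&lt;br /&gt;
   next epoch with longer slices. */&lt;br /&gt;
void new_epoch(struct task tasks[], int n) {&lt;br /&gt;
    for (int i = 0; i &amp;lt; n; i++)&lt;br /&gt;
        tasks[i].counter = (tasks[i].counter &amp;gt;&amp;gt; 1) + BASE_SLICE;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;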
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When the time a task has received falls out of balance, that task must be given more time to run, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If it is still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side migrate toward the left, maintaining fairness.&lt;br /&gt;
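&lt;br /&gt;
A toy model of this policy is sketched below; it substitutes a plain array scan for the kernel&#039;s red-black tree so the core idea stays visible: always run the task with the smallest virtual runtime, then charge it for the time it used. All names and numbers are invented for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NTASKS 3&lt;br /&gt;
&lt;br /&gt;
/* Hypothetical task: accumulated virtual runtime in nanoseconds. */&lt;br /&gt;
struct task { const char *name; unsigned long long vruntime; };&lt;br /&gt;
&lt;br /&gt;
/* Stand-in for taking the leftmost node of the red-black tree:&lt;br /&gt;
   find the runnable task with the smallest virtual runtime. */&lt;br /&gt;
static struct task *pick_min_vruntime(struct task t[], int n) {&lt;br /&gt;
    struct task *best = &amp;amp;t[0];&lt;br /&gt;
    for (int i = 1; i &amp;lt; n; i++)&lt;br /&gt;
        if (t[i].vruntime &amp;lt; best-&amp;gt;vruntime)&lt;br /&gt;
            best = &amp;amp;t[i];&lt;br /&gt;
    return best;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    struct task tasks[NTASKS] = { { &amp;quot;A&amp;quot;, 0 }, { &amp;quot;B&amp;quot;, 0 }, { &amp;quot;C&amp;quot;, 0 } };&lt;br /&gt;
    /* Six scheduling rounds; each round charges 1 ms of virtual runtime&lt;br /&gt;
       to the task that ran, so CPU time stays evenly shared. */&lt;br /&gt;
    for (int round = 0; round &amp;lt; 6; round++) {&lt;br /&gt;
        struct task *t = pick_min_vruntime(tasks, NTASKS);&lt;br /&gt;
        t-&amp;gt;vruntime += 1000000ULL; /* 1 ms in ns */&lt;br /&gt;
        printf(&amp;quot;round %d: ran %s (vruntime %llu ns)\n&amp;quot;, round, t-&amp;gt;name, t-&amp;gt;vruntime);&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;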
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command &#039;&#039;nice&#039;&#039;. Nice values range from -20 to +19. &lt;br /&gt;
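&lt;br /&gt;
Besides the shell command, nice values can be adjusted from C through the POSIX nice() and getpriority() calls; the sketch below raises this process&#039;s nice value by 5 (lowering its priority), something any user may do, while lowering a nice value normally requires root.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;errno.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;       /* nice() */&lt;br /&gt;
#include &amp;lt;sys/resource.h&amp;gt; /* getpriority() */&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    int before = getpriority(PRIO_PROCESS, 0);&lt;br /&gt;
    /* nice() may legitimately return -1, so errno must be checked. */&lt;br /&gt;
    errno = 0;&lt;br /&gt;
    if (nice(5) == -1 &amp;amp;&amp;amp; errno != 0)&lt;br /&gt;
        perror(&amp;quot;nice&amp;quot;);&lt;br /&gt;
    int after = getpriority(PRIO_PROCESS, 0);&lt;br /&gt;
    printf(&amp;quot;nice value: %d -&amp;gt; %d\n&amp;quot;, before, after);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the shell, &#039;&#039;nice -n 5 ./program&#039;&#039; starts a program at that adjusted level, and &#039;&#039;renice&#039;&#039; changes the value of one that is already running.&lt;br /&gt;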
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices were dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it meant that behaviour varied from machine to machine, which made it impossible for the Linux developers to fine-tune their scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3721</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3721"/>
		<updated>2010-10-14T13:33:33Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
Many different algorithms are utilized in schedulers; a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, interactive I/O-bound tasks are typically kept at a high priority: they spend most of their time waiting for the user, and need the CPU only briefly when input arrives.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers utilize a combination of the different algorithms; the Multi-Level Feedback Queue, for example, was in one way or another utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing that is certain is that as computer hardware has increased in complexity, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (in line with &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. The additional complexity highlighted the problem of having an O(n) scheduler for managing processes: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPUs), a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate on more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia, which the prof has kind of implied is not a good idea; I also added another one where the existing reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming towards a fair and fast scheduler. Various methods and concepts have been tried over different versions to reach this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features allowing personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since the original release of the UNIX operating system, on which it is modelled, in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing the scheduler to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, part of the remaining time was carried into its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When the time a task has received falls out of balance, that task must be given more time to run, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If it is still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side migrate toward the left, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command &#039;&#039;nice&#039;&#039;. Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices were dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it meant that behaviour varied from machine to machine, which made it impossible for the Linux developers to fine-tune their scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3715</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3715"/>
		<updated>2010-10-14T13:26:22Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
Many different algorithms are utilized in schedulers; a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, interactive I/O-bound tasks are typically kept at a high priority: they spend most of their time waiting for the user, and need the CPU only briefly when input arrives.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers utilize a combination of the different algorithms; the Multi-Level Feedback Queue, for example, was in one way or another utilized in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing that is certain is that as computer hardware has increased in complexity, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
[[Image:UnixFamilyTree.png|thumb|alt=A family tree of Unix based systems.|The Unix Family Tree]]In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (in line with &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. The additional complexity highlighted the problem of having an O(n) scheduler for managing processes: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable (multi-core CPUs), a better scheduler was required. This was the driving force behind the creation of ULE for FreeBSD.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate on more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia, which the prof has kind of implied is not a good idea; I also added another one where the existing reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming towards a fair and fast scheduler. Various methods and concepts have been tried over different versions to reach this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features allowing personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since the original release of the UNIX operating system, on which it is modelled, in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing the scheduler to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, part of the remaining time was carried into its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When the time a task has received falls out of balance, that task must be given more time to run, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, which is called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds that to its virtual runtime. If it is still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side migrate toward the left, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command &#039;&#039;nice&#039;&#039;. Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler&#039;s time slices were dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it meant that behaviour varied from machine to machine, which made it impossible for the Linux developers to fine-tune their scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time-slice allocation, the Linux scheduler also has a more dynamic feature: it monitors all active programs. If a program has been waiting an abnormally long time to use the processor, it is given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3714</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3714"/>
		<updated>2010-10-14T13:25:27Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time (a sketch of this loop follows the list).&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, I/O-bound tasks, which spend most of their time waiting for the user or for devices, are typically given higher priority so that they run promptly once they become ready.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
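To make the round-robin idea above concrete, here is a minimal sketch of its core loop; struct task and run_for() are stand-ins invented for this example, not a real kernel API:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Minimal round-robin core loop: every runnable task gets one */&lt;br /&gt;
/* fixed quantum in turn, then is preempted.                   */&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
struct task { int id; int runnable; };&lt;br /&gt;
&lt;br /&gt;
void run_for(struct task *t, int quantum_ms);     /* hypothetical stub */&lt;br /&gt;
&lt;br /&gt;
void round_robin(struct task *tasks, size_t n, int quantum_ms)&lt;br /&gt;
{&lt;br /&gt;
    for (;;) {                                    /* scheduler loop */&lt;br /&gt;
        for (size_t i = 0; i &amp;lt; n; i++)&lt;br /&gt;
            if (tasks[i].runnable)&lt;br /&gt;
                run_for(&amp;amp;tasks[i], quantum_ms);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;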
There is no single &amp;quot;best&amp;quot; algorithm, and most schedulers combine several of these approaches; the Multilevel Feedback Queue, for instance, was used in one form or another in Windows XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware has grown more complex, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have had to evolve to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD, which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
[[Image:UnixFamilyTree.png|thumb|alt=A family tree of Unix based systems.|The Unix Family Tree]]In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with an O(n) scheduler did not raise any performance issues. To ensure fairness, the scheduler would switch between processes every 0.1 second (100 milliseconds) in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (in line with &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. The additional complexity highlighted the problem of using an O(n) scheduler to manage processes: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable, a better scheduler was required. This was the driving force behind the creation of ULE.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
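&lt;br /&gt;
A rough sketch of that selection rule follows; the types and names are invented for illustration rather than taken from the FreeBSD sources:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* Pick the first thread of the highest-priority non-empty runqueue. */&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NQUEUES 32                        /* illustrative queue count */&lt;br /&gt;
&lt;br /&gt;
struct thread;                            /* opaque in this sketch */&lt;br /&gt;
struct runq { struct thread *head[NQUEUES]; };    /* 0 = highest priority */&lt;br /&gt;
&lt;br /&gt;
struct thread *pick_next(struct runq *rq)&lt;br /&gt;
{&lt;br /&gt;
    for (int q = 0; q &amp;lt; NQUEUES; q++)&lt;br /&gt;
        if (rq-&amp;gt;head[q] != NULL)&lt;br /&gt;
            return rq-&amp;gt;head[q];           /* runs for its 0.1 s slice */&lt;br /&gt;
    return NULL;                          /* nothing runnable: idle */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;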
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions to achieve this, including round-robin policies, iteration over tasks, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes since its first release in 1991, and its scheduler&#039;s lineage stretches back to the original UNIX operating system of 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; later schedulers improved on this dramatically, making scheduling decisions in constant time in the 2.6 O(1) scheduler and in logarithmic time in the current CFS.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, making it efficient at adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within each epoch a task could execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
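&lt;br /&gt;
The epoch recalculation can be sketched roughly as below; the field names are illustrative, and in the actual 2.4 code the unused time is carried over at half value:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* End of an epoch: every task gets a fresh base slice plus a  */&lt;br /&gt;
/* carry-over from whatever part of its slice it did not use.  */&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
struct task24 { int counter; int base_ticks; struct task24 *next; };&lt;br /&gt;
&lt;br /&gt;
void new_epoch(struct task24 *list)&lt;br /&gt;
{&lt;br /&gt;
    for (struct task24 *p = list; p != NULL; p = p-&amp;gt;next)&lt;br /&gt;
        p-&amp;gt;counter = (p-&amp;gt;counter &amp;gt;&amp;gt; 1) + p-&amp;gt;base_ticks;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;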
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around maintaining fairness in the processor time provided to tasks: each task should receive a fair share of time on the processor. When a task&#039;s share falls out of balance, meaning it has received less time than its fair share, the scheduler must give it more time in order to preserve fairness. To determine this balance, CFS tracks the amount of time given to each task, which is called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
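&lt;br /&gt;
Stripped of the tree itself, the selection and accounting can be sketched as below; a real implementation keeps tasks sorted in the red-black tree so that the leftmost node is found cheaply, whereas this illustration scans linearly only to stay short:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/* CFS-style pick: always run the task with the smallest vruntime. */&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
struct cfs_task { unsigned long long vruntime; int runnable; };&lt;br /&gt;
&lt;br /&gt;
struct cfs_task *pick_task(struct cfs_task *t, size_t n)&lt;br /&gt;
{&lt;br /&gt;
    struct cfs_task *best = NULL;         /* stands in for the leftmost node */&lt;br /&gt;
    for (size_t i = 0; i &amp;lt; n; i++) {&lt;br /&gt;
        if (!t[i].runnable)&lt;br /&gt;
            continue;&lt;br /&gt;
        if (best == NULL || t[i].vruntime &amp;lt; best-&amp;gt;vruntime)&lt;br /&gt;
            best = &amp;amp;t[i];&lt;br /&gt;
    }&lt;br /&gt;
    return best;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After the chosen task has run, its execution time is added to its vruntime and it is reinserted, which moves it toward the right of the tree.&lt;br /&gt;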
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), users can influence scheduling manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels receive significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust the niceness of a program with the shell command nice, and a program can change its own niceness through the nice() system call. Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it difficult for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they are entitled to.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature: they monitor all active programs. If a program has been waiting an abnormally long time for the processor, it is given a temporary increase in priority to compensate; similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3711</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3711"/>
		<updated>2010-10-14T13:21:45Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, I/O-bound tasks, which spend most of their time waiting for the user or for devices, are typically given higher priority so that they run promptly once they become ready.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no single &amp;quot;best&amp;quot; algorithm, and most schedulers combine several of these approaches; the Multilevel Feedback Queue, for instance, was used in one form or another in Windows XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware has grown more complex, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have had to evolve to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 13:21, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from 4.3BSD, which itself is a version of the UNIX scheduler [http://dspace.hil.unb.ca:8080/bitstream/handle/1882/100/roberson.pdf?sequence=1]. &lt;br /&gt;
[[Image:UnixFamilyTree.png|thumb|alt=A family tree of Unix based systems.|The Unix Family Tree]]In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. Like most traditional UNIX based systems, the BSD scheduler was designed to work on a single core computer system (with limited I/O) and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [http://www.thehackademy.net/madchat/ebooks/sched/FreeBSD/the_FreeBSD_process_scheduler.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity with the advent of multi-core CPUs and various new I/O devices, computer programs naturally increased in size and complexity to accommodate and manage the new hardware. With CPUs becoming more powerful (in line with &amp;lt;b&amp;gt;Moore&#039;s Law&amp;lt;/b&amp;gt; [http://www.intel.com/technology/mooreslaw/]), the time taken to complete a process decreased significantly. The additional complexity highlighted the problem of using an O(n) scheduler to manage processes: as more items were added to the scheduling algorithm, performance decreased. With symmetric multiprocessing (&amp;lt;b&amp;gt;SMP&amp;lt;/b&amp;gt;) becoming inevitable, a better scheduler was required. This was the driving force behind the creation of ULE.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions to achieve this, including round-robin policies, iteration over tasks, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes since its first release in 1991, and its scheduler&#039;s lineage stretches back to the original UNIX operating system of 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; later schedulers improved on this dramatically, making scheduling decisions in constant time in the 2.6 O(1) scheduler and in logarithmic time in the current CFS.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, making it efficient at adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within each epoch a task could execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around maintaining fairness in the processor time provided to tasks: each task should receive a fair share of time on the processor. When a task&#039;s share falls out of balance, meaning it has received less time than its fair share, the scheduler must give it more time in order to preserve fairness. To determine this balance, CFS tracks the amount of time given to each task, which is called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), users can influence scheduling manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels receive significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust the niceness of a program with the shell command nice, and a program can change its own niceness through the nice() system call. Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it difficult for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they are entitled to.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature: they monitor all active programs. If a program has been waiting an abnormally long time for the processor, it is given a temporary increase in priority to compensate; similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:UnixFamilyTree.png&amp;diff=3700</id>
		<title>File:UnixFamilyTree.png</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=File:UnixFamilyTree.png&amp;diff=3700"/>
		<updated>2010-10-14T12:56:08Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3585</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3585"/>
		<updated>2010-10-14T03:41:15Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, I/O-bound tasks, which spend most of their time waiting for the user or for devices, are typically given higher priority so that they run promptly once they become ready.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
There is no single &amp;quot;best&amp;quot; algorithm, and most schedulers combine several of these approaches; the Multilevel Feedback Queue, for instance, was used in one form or another in Windows XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OS X, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware has grown more complex, with multi-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have had to evolve to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The default BSD/FreeBSD scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&#039;&#039;&#039;The Linux scheduler&#039;&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which itself is a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler; as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the creation of the ULE scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming to be both fair and fast. Various methods and concepts have been tried across versions to achieve this, including round-robin policies, iteration over tasks, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes since its first release in 1991, and its scheduler&#039;s lineage stretches back to the original UNIX operating system of 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; later schedulers improved on this dramatically, making scheduling decisions in constant time in the 2.6 O(1) scheduler and in logarithmic time in the current CFS.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, making it efficient at adding and removing processes. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within each epoch a task could execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around maintaining fairness in the processor time provided to tasks: each task should receive a fair share of time on the processor. When a task&#039;s share falls out of balance, meaning it has received less time than its fair share, the scheduler must give it more time in order to preserve fairness. To determine this balance, CFS tracks the amount of time given to each task, which is called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, and tasks with a lower need of the CPU toward the right side. To keep fairness, the scheduler takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), users can influence scheduling manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels receive significantly more CPU time than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than lower-priority programs. Users can adjust the niceness of a program with the shell command nice, and a program can change its own niceness through the nice() system call. Nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler depended on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it difficult for the Linux developers to fine-tune the scheduler. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from muscling in on the CPU time of less nice programs, and also stops the less nice programs from taking more time than they are entitled to.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature: they monitor all active programs. If a program has been waiting an abnormally long time for the processor, it is given a temporary increase in priority to compensate; similarly, if a program has been hogging CPU time, it is temporarily given a lower priority.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3583</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3583"/>
		<updated>2010-10-14T03:36:48Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers use a combination of the different algorithms, such as the Multi-Level Feedback Queue, which in one form or another has been used in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. &amp;lt;br&amp;gt;One thing is certain: as computer hardware increases in complexity, with multiple-core CPUs (parallelization) and the advent of more powerful embedded/mobile devices, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers, namely the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
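To make the trade-offs among these policies concrete, here is a small Python sketch (illustrative only; the burst lengths and the quantum are invented) comparing how long processes wait for their first turn on the CPU under First-Come, First-Serve and under Round-Robin:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from collections import deque&lt;br /&gt;
&lt;br /&gt;
bursts = [24, 3, 3]  # CPU time each process needs (invented units)&lt;br /&gt;
&lt;br /&gt;
# First-Come, First-Serve: each process waits for all earlier ones&lt;br /&gt;
# to finish before it first touches the CPU.&lt;br /&gt;
def fcfs_first_access(bursts):&lt;br /&gt;
    waits, elapsed = [], 0&lt;br /&gt;
    for b in bursts:&lt;br /&gt;
        waits.append(elapsed)&lt;br /&gt;
        elapsed += b&lt;br /&gt;
    return sum(waits) / len(waits)&lt;br /&gt;
&lt;br /&gt;
# Round-Robin: rotate through the processes, one quantum at a time.&lt;br /&gt;
def rr_first_access(bursts, quantum=4):&lt;br /&gt;
    queue = deque((i, b) for i, b in enumerate(bursts))&lt;br /&gt;
    clock, first = 0, {}&lt;br /&gt;
    while queue:&lt;br /&gt;
        i, left = queue.popleft()&lt;br /&gt;
        first.setdefault(i, clock)   # when this process first ran&lt;br /&gt;
        ran = min(quantum, left)&lt;br /&gt;
        clock += ran&lt;br /&gt;
        if left - ran:               # unfinished: back of the queue&lt;br /&gt;
            queue.append((i, left - ran))&lt;br /&gt;
    return sum(first.values()) / len(bursts)&lt;br /&gt;
&lt;br /&gt;
print(&#039;FCFS average time to first run:&#039;, fcfs_first_access(bursts))  # 17.0&lt;br /&gt;
print(&#039;RR   average time to first run:&#039;, rr_first_access(bursts))    # about 3.7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The long first burst delays everyone under First-Come, First-Serve, while Round-Robin lets the short jobs onto the CPU quickly, at the cost of more context switches.&lt;br /&gt;
&lt;br /&gt;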
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD which itself is a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the evolution of the FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first; the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
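A minimal Python sketch of that runqueue arrangement (illustrative; the number of priority levels and the thread names are invented, and real FreeBSD used many more levels) scans from the highest-priority queue down and runs the first thread of the first non-empty queue:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from collections import deque&lt;br /&gt;
&lt;br /&gt;
NUM_PRIORITIES = 4  # invented; queue index 0 is the highest priority&lt;br /&gt;
runqueues = [deque() for _ in range(NUM_PRIORITIES)]&lt;br /&gt;
&lt;br /&gt;
runqueues[0].extend([&#039;rt_thread_a&#039;, &#039;rt_thread_b&#039;])  # real-time class&lt;br /&gt;
runqueues[3].append(&#039;idle_thread&#039;)                   # idle class&lt;br /&gt;
&lt;br /&gt;
def pick_next():&lt;br /&gt;
    # Evaluate runqueues from highest to lowest priority and take&lt;br /&gt;
    # the first thread of the first non-empty queue.&lt;br /&gt;
    for q in runqueues:&lt;br /&gt;
        if q:&lt;br /&gt;
            return q.popleft()&lt;br /&gt;
    return None&lt;br /&gt;
&lt;br /&gt;
print(pick_next())  # rt_thread_a: real-time work runs before idle work&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;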
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia, which the prof has kind of implied is not a good idea. I also added another one that was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward being both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round robins, iteration, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features to allow personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since UNIX, the operating system on which it is modeled, was first released in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; the current Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
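The Linux 1.2 arrangement is easy to picture with a short Python sketch (illustrative only; a deque of invented process names stands in for the kernel&#039;s circular list):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from collections import deque&lt;br /&gt;
&lt;br /&gt;
# A deque stands in for the kernel&#039;s circular list of processes.&lt;br /&gt;
runqueue = deque([&#039;init&#039;, &#039;getty&#039;, &#039;sh&#039;])&lt;br /&gt;
&lt;br /&gt;
def schedule():&lt;br /&gt;
    task = runqueue[0]   # run the task at the head of the queue&lt;br /&gt;
    runqueue.rotate(-1)  # rotate so the next task moves to the head&lt;br /&gt;
    return task&lt;br /&gt;
&lt;br /&gt;
for _ in range(5):&lt;br /&gt;
    print(&#039;running&#039;, schedule())  # init, getty, sh, init, getty&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;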
With the introduction of Linux 2.4, the scheduler was changed again. The scheduler became more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within an epoch each task could execute up to its time slice. If a task did not use up all of its time slice, half of the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
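The carry-over of unused time can be sketched as follows (Python, illustrative; the base slice value is invented, though the halving mirrors the counter recomputation the 2.4 kernel is reported to have used, roughly counter = counter/2 + priority):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Simplified epoch accounting in the spirit of the 2.4 scheduler: at&lt;br /&gt;
# the start of each epoch every task gets a fresh allocation plus&lt;br /&gt;
# half of whatever it left unused in the previous epoch.&lt;br /&gt;
BASE_SLICE = 6  # ticks; invented value for the illustration&lt;br /&gt;
&lt;br /&gt;
def new_epoch(remaining):&lt;br /&gt;
    return {name: left // 2 + BASE_SLICE&lt;br /&gt;
            for name, left in remaining.items()}&lt;br /&gt;
&lt;br /&gt;
# &#039;io_task&#039; blocked early and kept most of its slice; &#039;cpu_task&#039; used&lt;br /&gt;
# everything. After recomputation the I/O-bound task may run longer.&lt;br /&gt;
print(new_epoch({&#039;io_task&#039;: 5, &#039;cpu_task&#039;: 0}))&lt;br /&gt;
# {&#039;io_task&#039;: 8, &#039;cpu_task&#039;: 6}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;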
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the release of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place as the default scheduler in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When the amount of time given to a task is out of balance, the task must be given more time to run, since the scheduler has to maintain fairness. To determine this balance, CFS tracks the amount of time that has been provided to each task, a quantity known as the virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and its operations run in O(log n) time, where n is the number of nodes in the tree, allowing the scheduler to insert and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, while tasks with a lower need for the CPU are stored toward the right side. To maintain fairness, the scheduler always takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way, tasks on the left side are given time to execute, while the contents of the right side of the tree gradually migrate toward the left side, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), the user can influence scheduling manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program with the shell command nice. Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3580</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3580"/>
		<updated>2010-10-14T03:19:43Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt; &amp;lt;br&amp;gt;There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers use a combination of the different algorithms, such as the Multi-Level Feedback Queue, which is used in Win XP/Vista, Linux 2.5-2.6, FreeBSD, Mac OSX, NetBSD and Solaris. As computer hardware has increased in complexity with the advent of multiple-core CPUs (parallelization), operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD which itself is a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the evolution of the FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first; the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia, which the prof has kind of implied is not a good idea. I also added another one that was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward being both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round robins, iteration, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features to allow personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since UNIX, the operating system on which it is modeled, was first released in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; the current Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. The scheduler became more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within an epoch each task could execute up to its time slice. If a task did not use up all of its time slice, half of the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the release of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place as the default scheduler in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When the amount of time given to a task is out of balance, the task must be given more time to run, since the scheduler has to maintain fairness. To determine this balance, CFS tracks the amount of time that has been provided to each task, a quantity known as the virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and its operations run in O(log n) time, where n is the number of nodes in the tree, allowing the scheduler to insert and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, while tasks with a lower need for the CPU are stored toward the right side. To maintain fairness, the scheduler always takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way, tasks on the left side are given time to execute, while the contents of the right side of the tree gradually migrate toward the left side, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), the user can influence scheduling manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program with the shell command nice. Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3579</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3579"/>
		<updated>2010-10-14T03:17:20Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. It is a combination of &amp;lt;b&amp;gt; First-Come, First-Serve&amp;lt;/b&amp;gt;, &amp;lt;b&amp;gt;Round-Robin&amp;lt;/b&amp;gt; &amp;amp; &amp;lt;b&amp;gt; Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt; &amp;lt;br&amp;gt;There is no one &amp;quot;best&amp;quot; algorithm, and most schedulers use a combination of the different algorithms, such as the Multi-Level Feedback Queue (which is used in Win XP/Vista). As computer hardware has increased in complexity with the advent of multiple-core CPUs (parallelization), operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/Free BSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD which itself is a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the evolution of the FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially, there were two types of thread class, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf], and the scheduler would give processor time to real-time threads first; the idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority from highest to lowest and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices to various data structure implementations, and even various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), the Linux scheduler&#039;s advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections as the current reference only went to wikipedia, which the prof has kind of implied is not a good idea. I also added another one that was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward being both fair and fast. Various methods and concepts have been tried across versions in pursuit of this goal, including round robins, iteration, and queues. A quick read through the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time, as well as run as quickly as possible, with various features to allow personal tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since UNIX, the operating system on which it is modeled, was first released in 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; the current Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. The scheduler became more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within an epoch each task could execute up to its time slice. If a task did not use up all of its time slice, half of the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the release of Linux 2.6.23, the Completely Fair Scheduler (CFS) took its place as the default scheduler in the kernel. CFS is built around the idea of maintaining fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When the amount of time given to a task is out of balance, the task must be given more time to run, since the scheduler has to maintain fairness. To determine this balance, CFS tracks the amount of time that has been provided to each task, a quantity known as the virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS also changed. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and its operations run in O(log n) time, where n is the number of nodes in the tree, allowing the scheduler to insert and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, while tasks with a lower need for the CPU are stored toward the right side. To maintain fairness, the scheduler always takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is reinserted into the red-black tree. In this way, tasks on the left side are given time to execute, while the contents of the right side of the tree gradually migrate toward the left side, maintaining fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), the user can influence scheduling manually by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program with the shell command nice. Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3576</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3576"/>
		<updated>2010-10-14T03:09:56Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf][http://www.sci.csueastbay.edu/~billard/cs4560/node6.html][http://www.articles.assyriancafe.com/documents/CPU_Scheduling.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (similar to &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; and/or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. This method is also similar to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt; &amp;lt;br&amp;gt;As computer hardware has increased in complexity, for example with multi-core CPUs, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
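&lt;br /&gt;
Before turning to the two schedulers, a toy simulation may help make the round-robin policy above concrete. This is a sketch only: the task counts and the quantum are arbitrary, and all names are invented for illustration.&lt;br /&gt;
&amp;lt;pre&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

#define NTASKS  3
#define QUANTUM 2   /* abstract time units per slice */

int main(void) {
    int remaining[NTASKS] = {5, 3, 8};   /* work left per task */
    int done = 0, clock = 0;

    /* Cycle through the tasks, giving each at most one quantum per
     * turn, until every task has finished: equal shares of CPU time. */
    while (done &amp;lt; NTASKS) {
        for (int t = 0; t &amp;lt; NTASKS; t++) {
            if (remaining[t] == 0)
                continue;   /* task already finished */
            int run = remaining[t] &amp;lt; QUANTUM ? remaining[t] : QUANTUM;
            remaining[t] -= run;
            clock += run;
            printf(&amp;quot;t=%2d: task %d ran %d unit(s)\n&amp;quot;, clock, t, run);
            if (remaining[t] == 0)
                done++;
        }
    }
    return 0;
}
&amp;lt;/pre&amp;gt;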
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which was itself a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the evolution of the FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler gave processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
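&lt;br /&gt;
As a sketch of the mechanism just described, the following C fragment scans an array of runqueues from highest to lowest priority and dequeues the head of the first non-empty queue. The structure and all names here are assumptions made for illustration, not FreeBSD source.&lt;br /&gt;
&amp;lt;pre&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

#define NQUEUES 4   /* queue 0 holds the highest-priority threads */
#define QLEN    8

static int queue[NQUEUES][QLEN];   /* thread ids, in arrival order    */
static int count[NQUEUES];         /* threads currently in each queue */

static void enqueue(int q, int tid) {
    queue[q][count[q]++] = tid;
}

/* Evaluate the runqueues from highest to lowest priority and dequeue
 * the first thread of the first non-empty runqueue found. */
static int pick_next(void) {
    for (int q = 0; q &amp;lt; NQUEUES; q++) {
        if (count[q] &amp;gt; 0) {
            int tid = queue[q][0];
            for (int i = 1; i &amp;lt; count[q]; i++)
                queue[q][i - 1] = queue[q][i];
            count[q]--;
            return tid;
        }
    }
    return -1;   /* nothing runnable */
}

int main(void) {
    enqueue(2, 1);   /* thread 1 at a middling priority */
    enqueue(0, 2);   /* thread 2 at the highest priority */
    printf(&amp;quot;next thread: %d\n&amp;quot;, pick_next());   /* prints 2 */
    return 0;
}
&amp;lt;/pre&amp;gt;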
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), advancement of the Linux scheduler has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added another one where the reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming at a fair and fast scheduler. Various methods and concepts have been tried across versions to reach this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal; once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since its first release in 1991, and its design descends from the original UNIX operating system of 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, poorly scalable, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
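&lt;br /&gt;
A small sketch may clarify the epoch mechanism: a full pass over the task list picks the task with the most ticks left (hence O(n) per scheduling event), and at the start of each epoch half of any unused ticks carries over. The constants and names are invented for illustration, not Linux 2.4 source.&lt;br /&gt;
&amp;lt;pre&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

#define NTASKS     4
#define BASE_SLICE 6   /* ticks granted per epoch */

static int counter[NTASKS];   /* ticks left in the current epoch */

/* One full pass over all tasks per scheduling event: O(n). */
static int pick_next(void) {
    int best = -1, best_ticks = 0;
    for (int i = 0; i &amp;lt; NTASKS; i++) {
        if (counter[i] &amp;gt; best_ticks) {
            best_ticks = counter[i];
            best = i;
        }
    }
    return best;   /* -1 means every task used its slice: epoch over */
}

/* Start a new epoch: half of any unused ticks carries over, so a
 * task that slept gets to execute longer in its next epoch. */
static void new_epoch(void) {
    for (int i = 0; i &amp;lt; NTASKS; i++)
        counter[i] = counter[i] / 2 + BASE_SLICE;
}

int main(void) {
    new_epoch();
    counter[1] += 3;   /* pretend task 1 kept 3 unused ticks */
    printf(&amp;quot;next task: %d\n&amp;quot;, pick_next());   /* prints 1 */
    return 0;
}
&amp;lt;/pre&amp;gt;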
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks, which means each task gets a fair amount of time to run on the processor. When a task&#039;s share of time falls out of balance, that task must be given more time so that fairness is preserved. To track this balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored on the left side of the tree, while tasks with a lower need of the CPU are stored on the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the execution time the task receives on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side migrate toward the left to maintain fairness.&lt;br /&gt;
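&lt;br /&gt;
The following minimal C sketch shows the accounting idea. The real kernel keeps runnable tasks in the red-black tree and picks the leftmost node; for brevity this sketch finds the smallest virtual runtime with a linear scan, so only the fairness rule carries over. The weights and names are assumptions for illustration (in Linux, the weight is derived from the nice level).&lt;br /&gt;
&amp;lt;pre&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

#define NTASKS 3

static const unsigned long weight[NTASKS] = { 1024, 2048, 512 };
static unsigned long vruntime[NTASKS];   /* weighted CPU time so far */

/* Fairness rule: always run the task whose virtual runtime is
 * smallest (the leftmost tree node in the real scheduler). */
static int pick_next(void) {
    int best = 0;
    for (int i = 1; i &amp;lt; NTASKS; i++)
        if (vruntime[i] &amp;lt; vruntime[best])
            best = i;
    return best;
}

/* Charge real execution time, scaled by weight: a heavier task
 * accumulates virtual runtime more slowly and so runs more often. */
static void account(int i, unsigned long delta) {
    vruntime[i] += delta * 1024UL / weight[i];
}

int main(void) {
    for (int step = 0; step &amp;lt; 6; step++) {
        int t = pick_next();
        printf(&amp;quot;step %d: run task %d (vruntime %lu)\n&amp;quot;,
               step, t, vruntime[t]);
        account(t, 10);   /* pretend it ran for 10 time units */
    }
    return 0;
}
&amp;lt;/pre&amp;gt;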
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program with the nice shell command. Nice values can range from -20 to +19. &lt;br /&gt;
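&lt;br /&gt;
Niceness can also be adjusted programmatically. This sketch uses the POSIX getpriority(2)/setpriority(2) interface, the same mechanism the nice command relies on, to make the calling process politer; raising the nice value needs no privileges, while lowering it below zero normally does.&lt;br /&gt;
&amp;lt;pre&amp;gt;
#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;errno.h&amp;gt;
#include &amp;lt;sys/resource.h&amp;gt;

int main(void) {
    /* getpriority() may legitimately return -1, so errno must be
     * cleared first and checked afterwards. */
    errno = 0;
    int before = getpriority(PRIO_PROCESS, 0);   /* 0 = this process */
    if (errno != 0) {
        perror(&amp;quot;getpriority&amp;quot;);
        return 1;
    }
    /* Move this process to nice 10: a smaller share of the CPU. */
    if (setpriority(PRIO_PROCESS, 0, 10) == -1) {
        perror(&amp;quot;setpriority&amp;quot;);
        return 1;
    }
    printf(&amp;quot;nice value: %d, now %d\n&amp;quot;, before,
           getpriority(PRIO_PROCESS, 0));
    return 0;
}
&amp;lt;/pre&amp;gt;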
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3569</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3569"/>
		<updated>2010-10-14T03:01:43Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. This method is also similar to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt; &amp;lt;br&amp;gt;As computer hardware has increased in complexity, for example with multi-core CPUs, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which was itself a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the evolution of the FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler gave processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), advancement of the Linux scheduler has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added another one where the reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming at a fair and fast scheduler. Various methods and concepts have been tried across versions to reach this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal; once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since its first release in 1991, and its design descends from the original UNIX operating system of 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, poorly scalable, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks, which means each task gets a fair amount of time to run on the processor. When a task&#039;s share of time falls out of balance, that task must be given more time so that fairness is preserved. To track this balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored on the left side of the tree, while tasks with a lower need of the CPU are stored on the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the execution time the task receives on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side migrate toward the left to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program with the nice shell command. Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3567</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3567"/>
		<updated>2010-10-14T03:01:03Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. This method is also similar to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt; &amp;lt;br&amp;gt;As computer hardware has increased in complexity, for example with multi-core CPUs, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which was itself a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically with the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required. This was the driving force behind the evolution of the FreeBSD scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler introduced classes of threads, a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]; the scheduler gave processor time to real-time threads first, and idle threads had to wait until no real-time threads needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues based on priority from highest to lowest and execute the first thread of a non-empty runqueue it found. Once a non-empty runqueue was found, each thread in the runqueue would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle Symmetric Multiprocessing or Symmetric Multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate with more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), advancement of the Linux scheduler has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the current reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added another one where the reference was to a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming at a fair and fast scheduler. Various methods and concepts have been tried across versions to reach this goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal; once that was in place, speed was improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes over the decades since its first release in 1991, and its design descends from the original UNIX operating system of 1969 [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). The early versions had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; currently the Linux scheduler is able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed. It now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided tasks into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to the next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, poorly scalable, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around maintaining fairness in providing processor time to tasks, which means each task gets a fair amount of time to run on the processor. When a task&#039;s share of time falls out of balance, that task must be given more time so that fairness is preserved. To track this balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The execution model of CFS has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored on the left side of the tree, while tasks with a lower need of the CPU are stored on the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the execution time the task receives on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while tasks on the right side migrate toward the left to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with high negative nice levels run significantly faster than those with high positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program with the nice shell command. Nice values can range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3565</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3565"/>
		<updated>2010-10-14T03:00:14Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
(modified by --[[User:AbsMechanik|AbsMechanik]] 03:00, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). &lt;br /&gt;
There are several different algorithms which are utilized in different schedulers, but a few key algorithms are outlined below[http://joshaas.net/linux/linux_cpu_scheduler.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;First-Come, First-Serve&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;FIFO&amp;lt;/b&amp;gt;): No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Shortest Job First&amp;lt;/b&amp;gt; (also known as &amp;lt;b&amp;gt;Shortest Remaining Time&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;Shortest Process Next&amp;lt;/b&amp;gt;): Limited multi-tasking. The CPU handles the easiest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Round-Robin Scheduling&amp;lt;/b&amp;gt;: Fair multi-tasking. This method is similar in concept to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Multilevel Feedback Queue Scheduling&amp;lt;/b&amp;gt;: Rule-based multi-tasking. This method is also similar to &amp;lt;b&amp;gt;Fixed-Priority Preemptive Scheduling&amp;lt;/b&amp;gt;, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt; As computer hardware has increased in complexity, for example with multi-core CPUs, the schedulers of operating systems have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which was itself a version of the UNIX scheduler. In order to understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically through the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required, and this was a driving force behind the evolution of FreeBSD&#039;s scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority, from highest to lowest, and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
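&lt;br /&gt;
As a rough illustration of that selection logic, here is a small C sketch (a simplification for this essay, not the actual FreeBSD code: real runqueues hold linked lists of threads, while single integer thread ids stand in for them here):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NQUEUES 8                     /* queue count is illustrative */&lt;br /&gt;
&lt;br /&gt;
/* One thread id per priority level; -1 marks an empty queue. */&lt;br /&gt;
/* Index 0 is the highest priority. */&lt;br /&gt;
static int runq[NQUEUES] = { -1, 42, -1, -1, 7, -1, -1, -1 };&lt;br /&gt;
&lt;br /&gt;
/* Scan from highest to lowest priority and dequeue the first */&lt;br /&gt;
/* thread found; it then runs for its 0.1 s time slice. */&lt;br /&gt;
static int choose_next(void) {&lt;br /&gt;
    for (int q = 0; q &amp;lt; NQUEUES; q++) {&lt;br /&gt;
        if (runq[q] != -1) {&lt;br /&gt;
            int tid = runq[q];&lt;br /&gt;
            runq[q] = -1;             /* dequeue */&lt;br /&gt;
            return tid;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    return -1;                        /* nothing runnable: idle */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    printf(&amp;quot;next: thread %d\n&amp;quot;, choose_next());  /* 42, priority 1 */&lt;br /&gt;
    printf(&amp;quot;next: thread %d\n&amp;quot;, choose_next());  /* 7, priority 4 */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;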
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle symmetric multiprocessing or symmetric multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate on more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), Linux scheduler advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the existing reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward a fair and fast scheduler. Various methods and concepts have been tried across versions to reach that goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since 1969, when UNIX, the operating system Linux was modeled on, was first released [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). Early versions of Linux had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; by the 2.6 series, the scheduler was able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP. &lt;br /&gt;
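&lt;br /&gt;
A minimal C sketch of that circular-queue idea (our illustration only; the ring contents are invented, and the real Linux 1.2 scheduler tracked far more state per process):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define SLOTS 4&lt;br /&gt;
&lt;br /&gt;
/* Toy circular run queue: the scheduler walks the ring, running */&lt;br /&gt;
/* each process in turn and wrapping around at the end. */&lt;br /&gt;
static int ring[SLOTS] = { 11, 22, 33, 44 };   /* process ids */&lt;br /&gt;
static int cur = 0;&lt;br /&gt;
&lt;br /&gt;
static int next_process(void) {&lt;br /&gt;
    int pid = ring[cur];&lt;br /&gt;
    cur = (cur + 1) % SLOTS;          /* wrap: round robin */&lt;br /&gt;
    return pid;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    for (int i = 0; i &amp;lt; 6; i++)&lt;br /&gt;
        printf(&amp;quot;run pid %d\n&amp;quot;, next_process());  /* 11 22 33 44 11 22 */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;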
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
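&lt;br /&gt;
The following C sketch loosely imitates that epoch mechanism (an illustration only: the constants are invented, the pick() ranking is a crude stand-in for the 2.4 goodness() heuristic, and the carry-over shown here follows the simplified description above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NTASKS 3&lt;br /&gt;
#define BASE_SLICE 60                 /* ms per epoch; value invented */&lt;br /&gt;
&lt;br /&gt;
static int slice[NTASKS];             /* time left in current epoch */&lt;br /&gt;
&lt;br /&gt;
/* Start a new epoch: unused time from the last epoch is added on */&lt;br /&gt;
/* top of the fresh base slice, as described above. */&lt;br /&gt;
static void new_epoch(void) {&lt;br /&gt;
    for (int i = 0; i &amp;lt; NTASKS; i++)&lt;br /&gt;
        slice[i] = BASE_SLICE + slice[i];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* O(n) selection: visit every task on each scheduling event and */&lt;br /&gt;
/* pick the one with the most slice remaining. */&lt;br /&gt;
static int pick(void) {&lt;br /&gt;
    int best = -1;&lt;br /&gt;
    for (int i = 0; i &amp;lt; NTASKS; i++) {&lt;br /&gt;
        if (slice[i] &amp;lt;= 0)&lt;br /&gt;
            continue;&lt;br /&gt;
        if (best == -1 || slice[i] &amp;gt; slice[best])&lt;br /&gt;
            best = i;&lt;br /&gt;
    }&lt;br /&gt;
    return best;                      /* -1: epoch exhausted */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    slice[1] = 15;                    /* leftover from the last epoch */&lt;br /&gt;
    new_epoch();                      /* task 1 now has 60 + 15 = 75 */&lt;br /&gt;
    printf(&amp;quot;first pick: task %d\n&amp;quot;, pick());   /* task 1 */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;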
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in the processor time provided to tasks, meaning each task gets a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, that task must be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
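&lt;br /&gt;
The fairness rule can be sketched in a few lines of C (illustration only: a plain array with a linear scan stands in for the red-black tree, so the lookup here is O(n) rather than the O(log n) of the real structure):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define NTASKS 3&lt;br /&gt;
&lt;br /&gt;
/* Virtual runtime per task: how much CPU time each has received. */&lt;br /&gt;
static long vruntime[NTASKS];&lt;br /&gt;
&lt;br /&gt;
/* Always run the task that has had the least CPU time so far, */&lt;br /&gt;
/* i.e. the leftmost node of the real red-black tree. */&lt;br /&gt;
static int pick_min_vruntime(void) {&lt;br /&gt;
    int min = 0;&lt;br /&gt;
    for (int i = 1; i &amp;lt; NTASKS; i++)&lt;br /&gt;
        if (vruntime[i] &amp;lt; vruntime[min])&lt;br /&gt;
            min = i;&lt;br /&gt;
    return min;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    for (int tick = 0; tick &amp;lt; 6; tick++) {&lt;br /&gt;
        int t = pick_min_vruntime();&lt;br /&gt;
        vruntime[t] += 10;            /* account 10 ms of execution */&lt;br /&gt;
        printf(&amp;quot;tick %d: ran task %d\n&amp;quot;, tick, t);  /* 0 1 2 0 1 2 */&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;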
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can adjust the niceness of a program with the shell command nice; nice values range from -20 to +19. &lt;br /&gt;
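&lt;br /&gt;
For instance, a process can raise its own nice level through the POSIX nice() call (a minimal sketch; getpriority() is simply the portable way to read the current value back):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;errno.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
#include &amp;lt;sys/resource.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    /* Report the starting nice value of this process. */&lt;br /&gt;
    printf(&amp;quot;nice before: %d\n&amp;quot;, getpriority(PRIO_PROCESS, 0));&lt;br /&gt;
&lt;br /&gt;
    /* Raise our own niceness by 10; no privilege is needed to */&lt;br /&gt;
    /* become nicer (lowering niceness would require root). */&lt;br /&gt;
    errno = 0;&lt;br /&gt;
    int newnice = nice(10);&lt;br /&gt;
    if (errno != 0)&lt;br /&gt;
        perror(&amp;quot;nice&amp;quot;);&lt;br /&gt;
    else&lt;br /&gt;
        printf(&amp;quot;nice after:  %d\n&amp;quot;, newnice);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From the shell, the equivalent is prefixing a command with nice, e.g. nice -n 10 followed by the program to run.&lt;br /&gt;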
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
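&lt;br /&gt;
A toy version of that compensation heuristic might look like the following (entirely our illustration: the thresholds, bonus size, and the effective_priority name are invented and bear no relation to the kernel&#039;s actual formula):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Toy heuristic: long waiters get a temporary boost, CPU hogs a */&lt;br /&gt;
/* temporary penalty. Lower values are scheduled sooner. */&lt;br /&gt;
static int effective_priority(int nice, long waited_ms, long ran_ms) {&lt;br /&gt;
    int prio = nice;&lt;br /&gt;
    if (waited_ms &amp;gt; 1000)&lt;br /&gt;
        prio -= 5;                    /* starved: boost */&lt;br /&gt;
    if (ran_ms &amp;gt; 1000)&lt;br /&gt;
        prio += 5;                    /* hog: penalize */&lt;br /&gt;
    if (prio &amp;lt; -20) prio = -20;       /* clamp to the nice range */&lt;br /&gt;
    if (prio &amp;gt; 19) prio = 19;&lt;br /&gt;
    return prio;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    printf(&amp;quot;starved editor: %d\n&amp;quot;, effective_priority(0, 2500, 10));  /* -5 */&lt;br /&gt;
    printf(&amp;quot;busy encoder: %d\n&amp;quot;, effective_priority(0, 0, 9000));     /* +5 */&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;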
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3557</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3557"/>
		<updated>2010-10-14T02:37:24Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). As computer hardware has grown in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
(This work was modified by --[[User:AbsMechanik|AbsMechanik]] 02:37, 14 October 2010 (UTC) )&lt;br /&gt;
There are five basic algorithms for allocating CPU time[http://joshaas.net/linux/linux_cpu_scheduler.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;First-in, First-out: No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Shortest Time Remaining: Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Fixed-Priority Preemptive Scheduling: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Round-Robin Scheduling: Fair multi-tasking. This method is similar in concept to Fixed-Priority Preemptive Scheduling, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time. Round-robin scheduling was used in Linux 1.2.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Multilevel Queue Scheduling: Rule-based multi-tasking. This method is also similar to Fixed-Priority Preemptive Scheduling, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system. The O(1) algorithm used from Linux 2.6 up to 2.6.23 is based on a multilevel queue.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which itself is a version of the UNIX scheduler. To understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and to handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically through the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required, and this was a driving force behind the evolution of FreeBSD&#039;s scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority, from highest to lowest, and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle symmetric multiprocessing or symmetric multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate on more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), Linux scheduler advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the existing reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward a fair and fast scheduler. Various methods and concepts have been tried across versions to reach that goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since 1969, when UNIX, the operating system Linux was modeled on, was first released [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). Early versions of Linux had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; by the 2.6 series, the scheduler was able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in the processor time provided to tasks, meaning each task gets a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, that task must be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can adjust the niceness of a program with the shell command nice; nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3550</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3550"/>
		<updated>2010-10-14T02:35:01Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Overview &amp;amp; History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). As computer hardware has grown in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which itself is a version of the UNIX scheduler. To understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and to handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically through the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required, and this was a driving force behind the evolution of FreeBSD&#039;s scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority, from highest to lowest, and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle symmetric multiprocessing or symmetric multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate on more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), Linux scheduler advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the existing reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward a fair and fast scheduler. Various methods and concepts have been tried across versions to reach that goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since 1969, when UNIX, the operating system Linux was modeled on, was first released [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). Early versions of Linux had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; by the 2.6 series, the scheduler was able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2, the scheduler operated with a round-robin policy using a circular queue, allowing it to add and remove processes efficiently. When Linux 2.2 was introduced, the scheduler was changed: it now used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. The running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, allowing each task to execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and lacked useful support for real-time systems. On top of that, it had no features to exploit new hardware architectures, such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.6.23, the CFS (Completely Fair Scheduler) took its place in the kernel. CFS is built around maintaining fairness in the processor time provided to tasks, meaning each task gets a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, that task must be given more time, because the scheduler has to keep things fair. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need of the processor are stored toward the left side of the tree, while tasks with a lower need of the CPU are stored toward the right side. To keep fairness, the scheduler takes the leftmost node from the tree, accounts for the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime. If still runnable, the task is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be greedier, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods than programs with lower priority. Users can adjust the niceness of a program with the shell command nice; nice values range from -20 to +19. &lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3545</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3545"/>
		<updated>2010-10-14T02:29:38Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Tabulated Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:01, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A central component in managing these issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system (Jensen: 1985). As computer hardware has grown in complexity, for example with multi-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 23:41, 13 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally inherited its scheduler from BSD, which itself is a version of the UNIX scheduler. To understand the evolution of the FreeBSD scheduler, it is important to understand the original purpose and limitations of the BSD scheduler. The BSD scheduler was designed to work on a single-core computer system and to handle relatively small numbers of processes. As a result, managing resources with a scheduler which operates in O(n) time did not raise any performance issues for BSD. To ensure fairness, the scheduler would switch between processes every 0.1 seconds in a round-robin format [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf].&lt;br /&gt;
&lt;br /&gt;
As computer systems increased in complexity, specifically through the addition of multiple processors, computer programs increased in size as well. Although the additional complexity increased what could be accomplished with a computer, it also highlighted the problem of having an O(n) scheduler: as more items are added to the scheduling algorithm, performance decreases. With symmetric multiprocessing becoming inevitable, a better scheduler was required, and this was a driving force behind the evolution of FreeBSD&#039;s scheduler.&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to --[[User:Mike Preston|Mike Preston]] 00:02, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
The FreeBSD kernel originally used an enhanced version of the BSD scheduler. Specifically, the FreeBSD scheduler included classes of threads, which was a drastic change from the round-robin scheduling used in BSD. Initially there were two thread classes, real-time and idle [https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf]: the scheduler would give processor time to real-time threads first, and idle threads had to wait until there were no real-time threads that needed access to the processor.&lt;br /&gt;
&lt;br /&gt;
To manage the various threads, FreeBSD had data structures called runqueues into which the threads were placed. The scheduler would evaluate the runqueues by priority, from highest to lowest, and execute the first thread of the first non-empty runqueue it found. Once a non-empty runqueue was found, each thread in it would be assigned an equal time slice of 0.1 seconds, a value that has not changed in over 20 years [http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156]. &lt;br /&gt;
&lt;br /&gt;
Unfortunately, like the BSD scheduler it was based on, the original FreeBSD scheduler was not built to handle symmetric multiprocessing or symmetric multithreading on multi-core systems. The scheduler was still limited by an O(n) algorithm, which could not efficiently handle the loads required on increasingly powerful systems. To allow FreeBSD to operate on more modern computer systems, a new scheduler, the ULE scheduler, was necessary.&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work is owned by --[[User:Mike Preston|Mike Preston]] 00:23, 14 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
In order to effectively manage multi-core computer systems, FreeBSD needed a scheduler with an algorithm which would execute in constant time regardless of the number of threads involved. The ULE scheduler was designed for this purpose. It is of interest to note that throughout the course of the BSD/FreeBSD scheduler evolution, each iteration has just been an improvement on existing scheduler technologies. Although each version was designed to provide support for some current reality of computing, like multi-core systems, the evolution was out of necessity and not due to a desire to re-evaluate how the current version accomplished its tasks.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;slow&amp;quot; evolution of the FreeBSD scheduler becomes even more evident when comparing it to the Linux scheduler, which has evolved through a series of attempts to provide alternative ways to solve scheduling tasks. From dynamic time slices, to various data structure implementations, to various ways of describing priority levels (see: &amp;quot;nice&amp;quot; levels), Linux scheduler advancement has occurred through a series of drastic changes. In comparison, the FreeBSD scheduler has been changed only when the current version was no longer able to meet the needs of the existing computing climate.&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
(Note to the other group members: Feel free to modify or remove anything I post here. I&#039;m just trying to piece together what you&#039;ve all posted in the discussion section and turn it into a single paragraph. You know. Just to see how it looks.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 22:17, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
(Same for me, I&#039;m trying to put together the overview/history and work on the comparison section of the essay, all based off the history you guys give. If I miss anything or get anything wrong, feel free to correct.)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
(Austin - I added a reference to one of your sections, as the existing reference only went to Wikipedia, which the prof has kind of implied is not a good idea; I also added one where the existing reference was a blog post, as that was another thing the prof mentioned was not the best idea. I am hoping this will provide additional validation of the sources.)&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 00:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Wlawrenc|Wesley Lawrence]])&lt;br /&gt;
&lt;br /&gt;
The Linux scheduler has a long history of improvement, always aiming toward a fair and fast scheduler. Various methods and concepts have been tried across versions to reach that goal, including round robins, iteration, and queues. A quick read-through of the history of Linux suggests that equal and balanced use of the system was the scheduler&#039;s first goal, and once that was in place, speed was soon improved. Early schedulers did their best to give processes equal time and resources, but used a bit of extra time (in computer terms) to accomplish this. By Linux 2.6, after experimenting with different concepts, the scheduler was able to provide fair access and time while running as quickly as possible, with various features allowing tweaking by the system user, or even by the processes themselves.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]], modified by [[User:Sschnei1|Sschnei1]] )&lt;br /&gt;
&lt;br /&gt;
The Linux kernel has undergone many changes in the decades since 1969, when UNIX, the operating system Linux was modeled on, was first released [http://www.unix.com/whats-your-mind/110099-unix-40th-birthday.html](Stallings: 2009). Early versions of Linux had relatively inefficient schedulers which operated in linear time with respect to the number of tasks to schedule; by the 2.6 series, the scheduler was able to operate in constant time, independent of the number of tasks being scheduled.&lt;br /&gt;
&lt;br /&gt;
There are five basic algorithms for allocating CPU time[http://en.wikipedia.org/wiki/Scheduling_(computing)#Scheduling_disciplines][http://joshaas.net/linux/linux_cpu_scheduler.pdf]: &amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;First-in, First-out: No multi-tasking. Processes are queued in the order they are called. A process gets full, uninterrupted use of the CPU until it has finished running.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Shortest Time Remaining: Limited multi-tasking. The CPU handles the shortest tasks first, and complex, time-consuming tasks are handled last.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Fixed-Priority Preemptive Scheduling: Greater multi-tasking. Processes are assigned priority levels which are independent of their complexity. High-priority processes can be completed quickly, while low-priority processes can take a long time as new, higher-priority processes arrive and interrupt them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Round-Robin Scheduling: Fair multi-tasking. This method is similar in concept to Fixed-Priority Preemptive Scheduling, but all processes are assigned the same priority level; that is, every running process is given an equal share of CPU time. Round-robin scheduling was used in Linux 1.2.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Multilevel Queue Scheduling: Rule-based multi-tasking. This method is also similar to Fixed-Priority Preemptive Scheduling, but processes are associated with groups that help determine how high their priorities are. For example, all I/O tasks get low priority since much time is spent waiting for the user to interact with the system. The O(1) algorithm used from Linux 2.6 up to 2.6.23 is based on a multilevel queue.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
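&lt;br /&gt;
To make the round-robin discipline concrete, here is a minimal sketch in C. It is a toy model (made-up task names and abstract time units, not kernel code): each runnable task in a circular queue gets one fixed quantum per pass.&lt;br /&gt;
&lt;br /&gt;
      #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
      &lt;br /&gt;
      #define QUANTUM 2   /* time units a task may run per turn */&lt;br /&gt;
      &lt;br /&gt;
      typedef struct { const char *name; int remaining; } task_t;&lt;br /&gt;
      &lt;br /&gt;
      int main(void) {&lt;br /&gt;
          task_t tasks[3] = { {&amp;quot;A&amp;quot;, 5}, {&amp;quot;B&amp;quot;, 3}, {&amp;quot;C&amp;quot;, 4} };&lt;br /&gt;
          int done = 0;&lt;br /&gt;
          /* cycle through the circular queue until every task is finished */&lt;br /&gt;
          for (int i = 0; done &amp;lt; 3; i = (i + 1) % 3) {&lt;br /&gt;
              if (tasks[i].remaining == 0) continue;   /* skip finished tasks */&lt;br /&gt;
              int slice = tasks[i].remaining &amp;lt; QUANTUM ? tasks[i].remaining : QUANTUM;&lt;br /&gt;
              tasks[i].remaining -= slice;&lt;br /&gt;
              printf(&amp;quot;ran %s for %d units, %d left\n&amp;quot;, tasks[i].name, slice, tasks[i].remaining);&lt;br /&gt;
              if (tasks[i].remaining == 0) done++;&lt;br /&gt;
          }&lt;br /&gt;
          return 0;&lt;br /&gt;
      }&lt;br /&gt;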
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
(This work belongs to [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP. &lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within each epoch every task could execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures such as multi-core processors.&lt;br /&gt;
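&lt;br /&gt;
The O(n) behaviour is easy to see in a sketch. The fragment below is a simplification in C, with assumed runnable/counter/priority fields; the recharge rule mirrors the carry-over idea described above, but it is not the real 2.4 code.&lt;br /&gt;
&lt;br /&gt;
      #define NTASKS 4&lt;br /&gt;
      &lt;br /&gt;
      typedef struct { int runnable; int counter; int priority; } task_t;&lt;br /&gt;
      &lt;br /&gt;
      /* one scheduling event: scan ALL tasks, hence O(n) in the task count */&lt;br /&gt;
      static int pick_next(task_t t[]) {&lt;br /&gt;
          int best = -1, best_w = 0;&lt;br /&gt;
          for (int i = 0; i &amp;lt; NTASKS; i++) {&lt;br /&gt;
              if (!t[i].runnable || t[i].counter == 0) continue;&lt;br /&gt;
              int w = t[i].counter + t[i].priority;   /* crude goodness weight */&lt;br /&gt;
              if (w &amp;gt; best_w) { best_w = w; best = i; }&lt;br /&gt;
          }&lt;br /&gt;
          return best;   /* -1 means the epoch is over */&lt;br /&gt;
      }&lt;br /&gt;
      &lt;br /&gt;
      /* new epoch: half of any unused time slice carries over into the next */&lt;br /&gt;
      static void new_epoch(task_t t[]) {&lt;br /&gt;
          for (int i = 0; i &amp;lt; NTASKS; i++)&lt;br /&gt;
              t[i].counter = t[i].counter / 2 + t[i].priority;&lt;br /&gt;
      }&lt;br /&gt;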
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:Sschnei1|Sschnei1]])&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, that task has to be given more time, because the scheduler must preserve fairness. To track the balance, CFS maintains the amount of time that has been given to each task, called its virtual runtime. &lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing and its operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree; tasks with a lower need for the CPU are stored toward the right. To keep fairness, the scheduler takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness.&lt;br /&gt;
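&lt;br /&gt;
As a toy model of that loop, the C sketch below substitutes a linear scan for the red-black tree, so the leftmost-node logic is visible without the tree mechanics (task names and tick sizes are invented):&lt;br /&gt;
&lt;br /&gt;
      #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
      &lt;br /&gt;
      #define NTASKS 3&lt;br /&gt;
      &lt;br /&gt;
      typedef struct { const char *name; long vruntime; } task_t;&lt;br /&gt;
      &lt;br /&gt;
      /* stand-in for taking the leftmost tree node: smallest vruntime wins */&lt;br /&gt;
      static int pick_leftmost(const task_t t[]) {&lt;br /&gt;
          int best = 0;&lt;br /&gt;
          for (int i = 1; i &amp;lt; NTASKS; i++)&lt;br /&gt;
              if (t[i].vruntime &amp;lt; t[best].vruntime) best = i;&lt;br /&gt;
          return best;&lt;br /&gt;
      }&lt;br /&gt;
      &lt;br /&gt;
      int main(void) {&lt;br /&gt;
          task_t tasks[NTASKS] = { {&amp;quot;A&amp;quot;, 0}, {&amp;quot;B&amp;quot;, 0}, {&amp;quot;C&amp;quot;, 0} };&lt;br /&gt;
          for (int tick = 0; tick &amp;lt; 6; tick++) {&lt;br /&gt;
              int cur = pick_leftmost(tasks);   /* the most-owed task runs next */&lt;br /&gt;
              tasks[cur].vruntime += 10;        /* account its execution time, then reinsert */&lt;br /&gt;
              printf(&amp;quot;tick %d: ran %s, vruntime %ld\n&amp;quot;, tick, tasks[cur].name, tasks[cur].vruntime);&lt;br /&gt;
          }&lt;br /&gt;
          return 0;&lt;br /&gt;
      }&lt;br /&gt;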
&lt;br /&gt;
&lt;br /&gt;
(This work was done by [[User:abondio2|Austin Bondio]])&lt;br /&gt;
&lt;br /&gt;
Under a recent Linux system (version 2.6.35 or later), scheduling can be influenced manually by the user, who can assign programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19. &lt;br /&gt;
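&lt;br /&gt;
For instance, nice -n 10 ./worker at the shell would start a (hypothetical) worker program at niceness 10, and a process can also lower its own priority from C, as in this small sketch:&lt;br /&gt;
&lt;br /&gt;
      #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
      #include &amp;lt;unistd.h&amp;gt;   /* nice() */&lt;br /&gt;
      &lt;br /&gt;
      int main(void) {&lt;br /&gt;
          /* raise our own niceness by 10: be nicer, yield the CPU more readily */&lt;br /&gt;
          int level = nice(10);&lt;br /&gt;
          printf(&amp;quot;now running at nice level %d\n&amp;quot;, level);&lt;br /&gt;
          /* ...low-priority background work would go here... */&lt;br /&gt;
          return 0;&lt;br /&gt;
      }&lt;br /&gt;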
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.&lt;br /&gt;
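&lt;br /&gt;
That compensation logic might look like the following sketch (fields and thresholds invented for illustration; real kernels use carefully tuned interactivity heuristics):&lt;br /&gt;
&lt;br /&gt;
      typedef struct { int nice; long waited; long ran; } task_t;&lt;br /&gt;
      &lt;br /&gt;
      /* effective priority = static nice level plus a temporary adjustment; */&lt;br /&gt;
      /* lower values mean the task is scheduled sooner                      */&lt;br /&gt;
      static int effective_priority(const task_t t) {&lt;br /&gt;
          int prio = t.nice;&lt;br /&gt;
          if (t.waited &amp;gt; 1000) prio -= 5;   /* starved for a while: boost it */&lt;br /&gt;
          if (t.ran &amp;gt; 1000)    prio += 5;   /* hogging the CPU: penalize it  */&lt;br /&gt;
          return prio;&lt;br /&gt;
      }&lt;br /&gt;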
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
(Once I read/see some history on the BSD section above, I&#039;ll do the best comparison I can. I&#039;m balancing 3000/3004 and other courses (like most of you), so I don&#039;t think I can research/write BSD and write the comparison, but I will try to help out as much as I can)&lt;br /&gt;
&lt;br /&gt;
-- [[User:Wlawrenc|Wesley Lawrence]]&lt;br /&gt;
&lt;br /&gt;
I&#039;ve got this. Hopefully most of the sections I created properly answer the question. I&#039;m still going to go over everyone&#039;s answers and keep in mind that Wikipedia cannot be cited as a resource. --[[User:AbsMechanik|AbsMechanik]] 02:29, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&lt;br /&gt;
1. Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985.&lt;br /&gt;
&lt;br /&gt;
2. Stallings, William, Operating Systems: Internals and Design Principles, Pearson Prentice Hall, 2009.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3339</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3339"/>
		<updated>2010-10-13T20:37:41Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
==Tabulated Results==&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3337</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3337"/>
		<updated>2010-10-13T20:35:28Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system. As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3329</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3329"/>
		<updated>2010-10-13T19:51:42Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.[[File:Real-Time Operating Systems.pdf]] As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3328</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3328"/>
		<updated>2010-10-13T19:47:04Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==BSD/FreeBSD Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Linux Schedulers==&lt;br /&gt;
&lt;br /&gt;
===Overview &amp;amp; History===&lt;br /&gt;
&lt;br /&gt;
===Older Versions===&lt;br /&gt;
&lt;br /&gt;
===Current Version===&lt;br /&gt;
&lt;br /&gt;
==Current Challenges==&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3307</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3307"/>
		<updated>2010-10-13T18:19:26Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I just moved the Resources section to our discussion page --[[User:AbsMechanik|AbsMechanik]] 18:19, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and Linux currently uses the completely fair scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduling algorithm operated in O(n) time, so as more tasks were added, the scheduler became slower. In addition to this, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the completely fair scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which was changed in 2.6.23. CFS is not based on a queue like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler (a sketch of the run-queue scan follows this list):&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
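A compact C sketch of that run-queue scan (the queue layout is invented for illustration; in the real kernel each entry is a linked list of threads):&lt;br /&gt;
&lt;br /&gt;
      #define NQUEUES 8   /* priority levels, 0 = highest */&lt;br /&gt;
      &lt;br /&gt;
      typedef struct thread { int tid; struct thread *next; } thread_t;&lt;br /&gt;
      &lt;br /&gt;
      static thread_t *runq[NQUEUES];   /* one run queue per priority level */&lt;br /&gt;
      &lt;br /&gt;
      /* scan from highest to lowest priority; run the first thread of the */&lt;br /&gt;
      /* first non-empty queue, giving each thread in it a 0.1 s slice     */&lt;br /&gt;
      static thread_t *choose_thread(void) {&lt;br /&gt;
          for (int q = 0; q &amp;lt; NQUEUES; q++)&lt;br /&gt;
              if (runq[q])&lt;br /&gt;
                  return runq[q];&lt;br /&gt;
          return 0;   /* nothing runnable: idle */&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;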
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. Scheduler uses 2 priority queue arrays to achieve fairness (sketched below). It does this by giving each thread a time slice and a priority, and executing threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
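The two-array trick from point 2 might be sketched like this (the structure is assumed for illustration, not the actual O(1) code):&lt;br /&gt;
&lt;br /&gt;
      typedef struct { int prio; int slice; } thread_t;&lt;br /&gt;
      &lt;br /&gt;
      /* 140 priority levels, as in the 2.6 O(1) scheduler; each slot would */&lt;br /&gt;
      /* really hold a list of threads rather than a single pointer         */&lt;br /&gt;
      typedef struct { thread_t *level[140]; } prio_array_t;&lt;br /&gt;
      &lt;br /&gt;
      static prio_array_t arrays[2];&lt;br /&gt;
      static prio_array_t *active = &amp;amp;arrays[0], *expired = &amp;amp;arrays[1];&lt;br /&gt;
      &lt;br /&gt;
      /* threads that exhaust their slice move to the expired array; once  */&lt;br /&gt;
      /* the active array is empty, the arrays swap in constant time       */&lt;br /&gt;
      static void swap_arrays(void) {&lt;br /&gt;
          prio_array_t *tmp = active;&lt;br /&gt;
          active = expired;&lt;br /&gt;
          expired = tmp;&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;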
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough on research to write an essay or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our contact emails and names to write down who would like to write what part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text and compare it to the references posted in the &amp;quot;Sources&amp;quot; section and check if someone is doing plagiarism. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a little foreword on schedulers in relation to types of threads I&#039;ve composed based off of one of my sources. I&#039;m not sure if it&#039;s necessary since there is one Mike typed below, but here it is just for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all of these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a little, few sentence description of the issue? I&#039;ve found a source for what I think is what he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:54, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n) because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within each epoch every task could execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing it to execute longer in the next epoch. Because the scheduler simply iterated over all tasks, it was inefficient and scaled poorly, and it lacked useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures such as multi-core processors.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6 introduced another scheduler, used up to Linux 2.6.23: the O(1) scheduler. It made each scheduling decision in the same amount of time, independent of how many tasks there were, and it kept track of the tasks in a run queue. The scheduler offered much more scalability. To determine whether a task was I/O bound or processor bound, the scheduler used interactivity metrics with numerous heuristics. Because the code was difficult to maintain and a large part of it existed only to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is the scheduler in current Linux versions.&lt;br /&gt;
&lt;br /&gt;
As of Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built on the idea of maintaining fairness in providing processor time to tasks: each task should get a fair share of time on the processor. When a task&#039;s share falls out of balance, that task has to be given more time, because the scheduler must preserve fairness. To track the balance, CFS maintains the amount of time that has been given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The model of how CFS executes has changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing and its operations run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree; tasks with a lower need for the CPU are stored toward the right. To keep fairness, the scheduler takes the leftmost node of the tree. It then accounts for the task&#039;s execution time on the CPU and adds it to the task&#039;s virtual runtime. If still runnable, the task is reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be influenced manually by the user, who can assign programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, and a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which refer to the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing on a contrast of the CFS scheduler right now, please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS models an ideal, precise multitasking CPU on real hardware. Precise multitasking means that each process runs at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can be executed at a time while the other tasks wait, which gives the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance over the processes, CFS keeps a wait run-time for each process and tries to pick the process with the highest wait run-time value. To approximate real multitasking, CFS splits up the CPU time between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, but in a self-balancing red-black tree. Tasks with a higher need for CPU time are stored on the left side of the tree, and tasks with a lower need for CPU time on the right side. The task on the far left is picked by the scheduler and given CPU time to run. The tree then re-balances itself, and new tasks can be inserted.&lt;br /&gt;
&lt;br /&gt;
CFS is designed in a way that does not need traditional timeslicing: it accounts time at nanosecond granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit, here&#039;s what I&#039;ve done so far. I&#039;ve been going through stuff on the 4BSD and ULE schedulers, here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads&#039; different scheduling requirements. FreeBSD&#039;s time-share scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect its resource requirements and the amount of resources consumed by the thread. Based on its priority, a thread is moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread if it is in user mode; otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads that have been waiting on I/O for one or more seconds, and by lowering the priority of threads that consume significant amounts of CPU time.&lt;br /&gt;
&lt;br /&gt;
[1] In older BSD systems (and I mean old, as in 20 or so years ago), a 1-second quantum was used for the round-robin scheduling algorithm. Later, in BSD 4.2, rescheduling happened every 0.1 seconds and priority re-computation every second, and these values haven’t changed since. Round-robin scheduling is done by a timeout mechanism, which tells the clock interrupt driver to call a certain system routine after a specified interval. The subroutine to be called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 seconds later (see the sketch below). The priority re-computation is also timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
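&lt;br /&gt;
That self-rearming timeout pattern might look like the sketch below; timeout() here is a stand-in for the kernel callout facility, so its signature is an assumption, and the stub body just makes the sketch self-contained.&lt;br /&gt;
&lt;br /&gt;
      #define HZ 100   /* clock interrupts per second */&lt;br /&gt;
      &lt;br /&gt;
      /* hypothetical callout registration: call fn after that many ticks */&lt;br /&gt;
      static void timeout(void (*fn)(void), int ticks) { (void)fn; (void)ticks; }&lt;br /&gt;
      &lt;br /&gt;
      static void roundrobin(void) {&lt;br /&gt;
          /* ...force a reschedule of the running thread here... */&lt;br /&gt;
          timeout(roundrobin, HZ / 10);   /* re-arm: fire again in 0.1 s */&lt;br /&gt;
      }&lt;br /&gt;
      &lt;br /&gt;
      static void schedcpu(void) {&lt;br /&gt;
          /* ...recompute thread priorities here... */&lt;br /&gt;
          timeout(schedcpu, HZ);          /* re-arm: fire again in 1 s */&lt;br /&gt;
      }&lt;br /&gt;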
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, though it was disabled by default in favor of the default 4BSD scheduler; it was not until FreeBSD 7.1 that ULE became the new default. The ULE scheduler was an overhaul of the original scheduler, adding support for symmetric multiprocessing (SMP) and for symmetric multithreading (SMT) on multi-core systems, and improving the scheduling algorithm so that execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&amp;lt;more to come&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:51, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3306</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=3306"/>
		<updated>2010-10-13T18:18:52Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3305</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=3305"/>
		<updated>2010-10-13T18:18:38Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and Linux currently uses the completely fair scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Brain Fuck Scheduler: http://en.wikipedia.org/wiki/Brain_Fuck_Scheduler&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;br /&gt;
&lt;br /&gt;
Some more resources;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html (includes history of Linux scheduler from 1.2 to 2.6)&amp;lt;br /&amp;gt;&lt;br /&gt;
http://my.opera.com/blu3c4t/blog/show.dml/1531517 &amp;lt;br /&amp;gt;&lt;br /&gt;
-Wes&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Information on changes to the O(1) scheduler:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Kernel Documentation&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
General information on Linux Job Scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Linux Job Scheduling | Linux Journal&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.linuxjournal.com/article/4087&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Scheduling on multi-core Linux machines:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Node affine NUMA scheduler for Linux&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://home.arcor.de/efocht/sched/&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
More on Linux process scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;Understanding the Linux kernel&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://oreilly.com/catalog/linuxkernel/chapter/ch10.html&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
FreeBSD thread scheduling:&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;quot;InformIT: FreeBSD Process Management&amp;quot;&amp;lt;br /&amp;gt;&lt;br /&gt;
http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&amp;lt;br /&amp;gt;&lt;br /&gt;
- Austin Bondio&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it ran, the scheduling algorithm operated in O(n) time, so as more tasks were added, the scheduler became slower. In addition to this, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issues in O(1), constant, time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give any names for the schedulers you are talking about? I think it is easier to distinguish by names and not by the algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in the Linux kernel 2.6.23 with CFS (the completely fair scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which was changed in 2.6.23. CFS is not based on a queue like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. Scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC, larger time slices are given to higher priority tasks, smaller slices to lower priority tasks.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest as you would think this would lead to starvation situations if the priority was high enough on one or multiple threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it is useful. Do you think this is enough on research to write an essay or are we going to do some more research?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of stuff in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link has some brief description included in it). I think we have enough resources/research done. We should start to formulate these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article, please tell me what you think of the following. Note that I have included the resource used as a footnote, the placement of which I indicate with the number 1, and I just tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added a part to introduce the several schedulers for LINUX. We might need to change the reference, since I got it all from http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:27, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write down our contact emails and names to write down who would like to write what part.&lt;br /&gt;
&lt;br /&gt;
Another suggestion is that someone should read over the text and compare it to the references posted in the &amp;quot;Sources&amp;quot; section and check if someone is doing plagiarism. &lt;br /&gt;
&lt;br /&gt;
Sebastian Schneider - sebastian@gamersblog.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi, here&#039;s a little foreword on schedulers in relation to types of threads I&#039;ve composed based off of one of my sources. I&#039;m not sure if it&#039;s necessary since there is one Mike typed below, but here it is just for you guys to examine:&lt;br /&gt;
&lt;br /&gt;
Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler should be able to accommodate threads with all of these requirements simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Also: as Mike said earlier about BSD&#039;s issue with locking mechanisms, should I go into greater detail about that, or just include a little, few sentence description of the issue? I&#039;ve found a source for what I think is what he was referring to: http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:54, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. In order to ensure that a system will run efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has increased in complexity, for example with multiple-core CPUs, operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, Douglas E., C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 1.2 the scheduler operated with a round-robin policy over a circular queue, which made adding and removing processes efficient. When Linux 2.2 was introduced, the scheduler was changed: it used the idea of scheduling classes, allowing it to schedule real-time tasks, non-real-time tasks, and non-preemptible tasks. It was also the first Linux scheduler to support SMP.&lt;br /&gt;
&lt;br /&gt;
With the introduction of Linux 2.4, the scheduler was changed again. It was more complex than its predecessors, but it also had more features. Its running time was O(n), because it iterated over every task during a scheduling event. The scheduler divided time into epochs, and within an epoch each task could execute up to its time slice. If a task did not use up all of its time slice, the remaining time was added to its next time slice, allowing the task to execute longer in its next epoch. Because the scheduler simply iterated over all tasks, it was inefficient, scaled poorly, and had no useful support for real-time systems. On top of that, it had no features to exploit newer hardware architectures, such as multi-core processors.&lt;br /&gt;
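&lt;br /&gt;
(To make the O(n) cost concrete, here is a rough sketch in C of that kind of pick loop; the goodness metric and the carry-over rule are simplified stand-ins for the real 2.4 formulas.)&lt;br /&gt;
&lt;br /&gt;
 /* Rough sketch of a 2.4-style pick (illustration only). Every&lt;br /&gt;
    scheduling event walks the whole task list, hence O(n). */&lt;br /&gt;
 struct task {&lt;br /&gt;
     struct task *next;&lt;br /&gt;
     int counter;             /* remaining time slice in this epoch */&lt;br /&gt;
     int priority;            /* static priority derived from nice */&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 struct task *pick_next(struct task *list)&lt;br /&gt;
 {&lt;br /&gt;
     struct task *best = 0;&lt;br /&gt;
     int best_goodness = -1;&lt;br /&gt;
     for (struct task *t = list; t; t = t-&amp;gt;next) {  /* the O(n) walk */&lt;br /&gt;
         int goodness = t-&amp;gt;counter + t-&amp;gt;priority;   /* simplified */&lt;br /&gt;
         if (t-&amp;gt;counter &amp;gt; 0 &amp;amp;&amp;amp; goodness &amp;gt; best_goodness) {&lt;br /&gt;
             best = t;&lt;br /&gt;
             best_goodness = goodness;&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
     return best;             /* 0 here means the epoch is over */&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 /* End of an epoch: unused time is carried into the next slice. */&lt;br /&gt;
 void new_epoch(struct task *list)&lt;br /&gt;
 {&lt;br /&gt;
     for (struct task *t = list; t; t = t-&amp;gt;next)&lt;br /&gt;
         t-&amp;gt;counter = t-&amp;gt;counter / 2 + t-&amp;gt;priority;&lt;br /&gt;
 }&lt;br /&gt;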
&lt;br /&gt;
Early Linux 2.6 kernels, up to 2.6.23, used an O(1) scheduler: a scheduling decision took the same amount of time no matter how many tasks were runnable. It kept track of the tasks in run queues and offered much better scalability. To determine whether a task was I/O-bound or processor-bound, the scheduler used interactivity metrics built on numerous heuristics. Because the code was difficult to maintain and a large part of it existed only to calculate those heuristics, it was replaced in Linux 2.6.23 with the CFS scheduler, which is still the scheduler in current Linux versions.&lt;br /&gt;
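&lt;br /&gt;
(A sketch of how a pick can be O(1) in the number of tasks: one FIFO queue per priority level plus a bitmap of non-empty levels, so choosing the next task is a find-first-bit over a fixed-size bitmap. Illustration only; the names and constants are not the kernel&#039;s.)&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;strings.h&amp;gt;        /* ffs(): find first set bit */&lt;br /&gt;
 &lt;br /&gt;
 #define NPRIO 140           /* priority levels, as in the 2.6 kernel */&lt;br /&gt;
 &lt;br /&gt;
 struct task { struct task *next; };&lt;br /&gt;
 &lt;br /&gt;
 struct runqueue {&lt;br /&gt;
     unsigned int bitmap[5];        /* one bit per priority level */&lt;br /&gt;
     struct task *queue[NPRIO];     /* FIFO head per level */&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 struct task *pick_next(struct runqueue *rq)&lt;br /&gt;
 {&lt;br /&gt;
     for (int w = 0; w &amp;lt; 5; w++) {          /* fixed bound: O(1) */&lt;br /&gt;
         int bit = ffs((int)rq-&amp;gt;bitmap[w]); /* 1-based, 0 if none */&lt;br /&gt;
         if (bit)&lt;br /&gt;
             return rq-&amp;gt;queue[w * 32 + bit - 1];&lt;br /&gt;
     }&lt;br /&gt;
     return 0;                              /* nothing runnable */&lt;br /&gt;
 }&lt;br /&gt;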
&lt;br /&gt;
With Linux 2.6.23, the CFS scheduler took its place in the kernel. CFS is built around the idea of fairness in providing processor time to tasks: each task should get a fair amount of time to run on the processor. When a task&#039;s share of time is out of balance, the task has to be given more time, because the scheduler has to maintain fairness. To determine the balance, CFS maintains the amount of time given to each task, called its virtual runtime.&lt;br /&gt;
&lt;br /&gt;
The structure CFS uses to track tasks changed, too. The scheduler now maintains a time-ordered red-black tree. The tree is self-balancing, and operations on it run in O(log n), where n is the number of nodes in the tree, allowing the scheduler to add and remove tasks efficiently. Tasks with the greatest need for the processor are stored toward the left side of the tree, and tasks with a lower need for the CPU toward the right side. To maintain fairness, the scheduler takes the leftmost node of the tree, accounts for the task&#039;s execution time on the CPU, and adds it to the task&#039;s virtual runtime; if the task is still runnable, it is then reinserted into the red-black tree. This means tasks on the left side are given time to execute, while the contents of the right side of the tree migrate toward the left side to maintain fairness. [http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html]&lt;br /&gt;
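&lt;br /&gt;
(The policy is easy to sketch even without the tree. In the illustration below a plain array stands in for the red-black tree, but the rule is the same: always run the task with the smallest virtual runtime, then charge it for the time it ran.)&lt;br /&gt;
&lt;br /&gt;
 /* CFS policy sketch (illustration only; the kernel keeps tasks in&lt;br /&gt;
    a red-black tree keyed by vruntime instead of an array). */&lt;br /&gt;
 struct task {&lt;br /&gt;
     unsigned long long vruntime;  /* ns of CPU time received */&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 /* The &amp;quot;leftmost node&amp;quot;: the task that has run the least. */&lt;br /&gt;
 struct task *pick_next(struct task *tasks, int n)&lt;br /&gt;
 {&lt;br /&gt;
     struct task *left = &amp;amp;tasks[0];&lt;br /&gt;
     for (int i = 1; i &amp;lt; n; i++)&lt;br /&gt;
         if (tasks[i].vruntime &amp;lt; left-&amp;gt;vruntime)&lt;br /&gt;
             left = &amp;amp;tasks[i];&lt;br /&gt;
     return left;&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 /* Charging execution time moves a task to the right, so tasks&lt;br /&gt;
    that have been waiting drift toward the left of the tree. */&lt;br /&gt;
 void account(struct task *t, unsigned long long ran_ns)&lt;br /&gt;
 {&lt;br /&gt;
     t-&amp;gt;vruntime += ran_ns;&lt;br /&gt;
 }&lt;br /&gt;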
&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 19:26, 9 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ve started writing a bit about the Linux O(1) scheduler:&lt;br /&gt;
&lt;br /&gt;
Under a Linux system, scheduling can be influenced manually by the user by assigning programs different priority levels, called &amp;quot;nice levels.&amp;quot; Put simply, the higher a program&#039;s nice level is, the nicer it will be about sharing system resources. A program with a lower nice level will be more greedy, while a program with a higher nice level will more readily give up its CPU time to other, more important programs. This spectrum is not linear; programs with strongly negative nice levels run significantly faster than those with strongly positive nice levels. The Linux scheduler accomplishes this by sharing CPU usage in terms of time slices (also called quanta), which are the length of time a program can use the CPU before being forced to give it up. High-priority programs get much larger time slices, allowing them to use the CPU more often and for longer periods of time than programs with lower priority. Users can adjust the niceness of a program using the shell command nice. Nice values can range from -20 to +19.&lt;br /&gt;
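&lt;br /&gt;
(Besides the nice shell command, a program can change its own niceness with the setpriority system call; a minimal example:)&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;sys/resource.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Roughly equivalent to launching the program with &amp;quot;nice -n 10&amp;quot;. */&lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     if (setpriority(PRIO_PROCESS, 0, 10) == -1)  /* 0 = this process */&lt;br /&gt;
         perror(&amp;quot;setpriority&amp;quot;);&lt;br /&gt;
     printf(&amp;quot;now at nice %d\n&amp;quot;, getpriority(PRIO_PROCESS, 0));&lt;br /&gt;
     /* ...CPU-bound work here now gets smaller time slices... */&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;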
&lt;br /&gt;
In previous versions of Linux, the scheduler was dependent on the clock speed of the processor. While this dependency was an effective way of dividing up time slices, it made it impossible for the Linux developers to fine-tune their scheduler to perfection. In recent releases, specific nice levels are assigned fixed-size time slices instead. This keeps nice programs from trying to muscle in on the CPU time of less nice programs, and also stops the less nice programs from stealing more time than they deserve.[http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt]&lt;br /&gt;
&lt;br /&gt;
In addition to this fixed style of time slice allocation, Linux schedulers also have a more dynamic feature which causes them to monitor all active programs. If a program has been waiting an abnormally long time to use the processor, it will be given a temporary increase in priority to compensate. Similarly, if a program has been hogging CPU time, it will temporarily be given a lower priority rating.[http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726]&lt;br /&gt;
&lt;br /&gt;
-- [[User:abondio2|Austin Bondio]] Last edit: 14:39, 12 October 2010&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;m writing a contrast of the CFS scheduler right now; please don&#039;t edit it.&lt;br /&gt;
&lt;br /&gt;
In contrast to the O(1) scheduler, CFS models precise multitasking on real hardware. Precise multitasking means that each process runs at equal speed: if 4 processes are running at the same time, CFS assigns 25% of the CPU time to each process. On real hardware, only one task can be executed at a time while the other tasks have to wait, which gives the running task an unfair amount of CPU time.&lt;br /&gt;
&lt;br /&gt;
To avoid an unfair balance across processes, CFS keeps a wait runtime for each process and tries to pick the process with the highest wait runtime value. To approximate true multitasking, CFS splits the CPU time up between the running processes. &lt;br /&gt;
&lt;br /&gt;
Processes are not stored in a run queue, but in a self-balancing red-black tree, ordered so that the task with the greatest need for CPU time sits in the leftmost node, while tasks with a lower need for CPU time sit toward the right side of the tree. The scheduler picks the leftmost task and gives it CPU time to run; the tree then re-balances itself and new tasks can be inserted.&lt;br /&gt;
&lt;br /&gt;
CFS is designed so that it does not need fixed timeslicing. This is due to its nanosecond accounting granularity, which removes the need for jiffies or other HZ details. [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]&lt;br /&gt;
&lt;br /&gt;
-- [[User:Sschnei1|Sschnei1]] 16:32, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hey guys, sorry I&#039;ve been non-existent for the past little bit. I&#039;ve been going through material on the 4BSD and ULE schedulers; here&#039;s what I have so far:&lt;br /&gt;
&lt;br /&gt;
In order for FreeBSD to function, it requires a scheduler to be selected at the time the kernel is built. Also, all calls to scheduling code are resolved at compile time, meaning that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
&lt;br /&gt;
[3] The 4BSD scheduler was a general-purpose scheduler. Its primary goal was to balance threads’ different scheduling requirements. FreeBSD&#039;s time-share scheduling algorithm is based on multilevel feedback queues. The system adjusts the priority of a thread dynamically to reflect resource requirements and the amount of resources consumed by the thread. Based on its priority, a thread gets moved between run queues. When a new thread attains a higher priority than the currently running one, the system immediately switches to the new thread if it&#039;s in user mode; otherwise, the system switches as soon as the current thread leaves the kernel. The system scans the run queues in order of highest to lowest priority, and executes the first thread of the first non-empty run queue it finds. The system tailors its short-term scheduling algorithm to favor user-interactive jobs by raising the priority of threads that have been waiting on I/O for one or more seconds, and by lowering the priority of threads that hog significant amounts of CPU time.&lt;br /&gt;
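&lt;br /&gt;
(A toy version of that dynamic adjustment, just to make the mechanism concrete; the real formulae in the paper cited as [3] are more involved.)&lt;br /&gt;
&lt;br /&gt;
 /* Illustration only, not 4BSD source. CPU usage pushes a thread&#039;s&lt;br /&gt;
    priority down (larger number = runs later, as in BSD); time&lt;br /&gt;
    spent sleeping on I/O pulls it back up. */&lt;br /&gt;
 struct thread {&lt;br /&gt;
     int base_prio;    /* from the nice value */&lt;br /&gt;
     int estcpu;       /* estimate of recent CPU usage */&lt;br /&gt;
     int sleep_secs;   /* seconds spent waiting on I/O */&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 int thread_priority(struct thread *t)&lt;br /&gt;
 {&lt;br /&gt;
     int prio = t-&amp;gt;base_prio + t-&amp;gt;estcpu / 4;  /* hogs sink */&lt;br /&gt;
     if (t-&amp;gt;sleep_secs &amp;gt;= 1)&lt;br /&gt;
         prio -= t-&amp;gt;sleep_secs;                /* sleepers rise */&lt;br /&gt;
     return prio;  /* decides which run queue the thread goes in */&lt;br /&gt;
 }&lt;br /&gt;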
&lt;br /&gt;
[1] In older BSD systems (and I mean old, as in 20 or so years ago), a 1-second quantum was used for the round-robin scheduling algorithm. Later, BSD 4.2 rescheduled every 0.1 seconds and recomputed priorities every second, and these values haven’t changed since. Round-robin scheduling is done by a timeout mechanism, which tells the clock interrupt driver to call a certain system routine after a specified interval. The subroutine called, in this case, causes the rescheduling and then resubmits a timeout to call itself again 0.1 seconds later. The priority recomputation is likewise timed by a subroutine that resubmits a timeout for itself. &lt;br /&gt;
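&lt;br /&gt;
(The resubmitting-timeout pattern looks roughly like this, written against the old BSD timeout(fn, arg, ticks) callout interface; the helper bodies are stand-ins.)&lt;br /&gt;
&lt;br /&gt;
 /* Sketch of the self-rearming timeouts described above. hz is the&lt;br /&gt;
    number of clock ticks per second; timeout() asks the clock&lt;br /&gt;
    interrupt driver to call fn(arg) after the given tick count. */&lt;br /&gt;
 extern int hz;&lt;br /&gt;
 void timeout(void (*fn)(void *), void *arg, int ticks);&lt;br /&gt;
 void need_resched(void);            /* stand-in helpers */&lt;br /&gt;
 void recompute_priorities(void);&lt;br /&gt;
 &lt;br /&gt;
 void roundrobin(void *arg)          /* reschedule every 0.1 s */&lt;br /&gt;
 {&lt;br /&gt;
     need_resched();&lt;br /&gt;
     timeout(roundrobin, arg, hz / 10);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 void schedcpu(void *arg)            /* recompute priorities every 1 s */&lt;br /&gt;
 {&lt;br /&gt;
     recompute_priorities();&lt;br /&gt;
     timeout(schedcpu, arg, hz);&lt;br /&gt;
 }&lt;br /&gt;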
&lt;br /&gt;
The ULE scheduler was first introduced in FreeBSD 5, but it was disabled by default in favor of the existing 4BSD scheduler. It was not until FreeBSD 7.1 that the ULE scheduler became the new default. ULE was an overhaul of the original scheduler: it added support for symmetric multiprocessing (SMP) and for symmetric multithreading (SMT) on multi-core systems, and it improved the scheduling algorithm so that execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&amp;lt;more to come&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1 = http://www.cim.mcgill.ca/~franco/OpSys-304-427/lecture-notes/node46.html&lt;br /&gt;
2 = http://security.freebsd.org/advisories/FreeBSD-EN-10:02.sched_ule.asc&lt;br /&gt;
3 = McKusick, M. K. and Neville-Neil, G. V. 2004. Thread Scheduling in FreeBSD 5.2. Queue 2, 7 (Oct. 2004), 58-64. DOI= http://doi.acm.org/10.1145/1035594.1035622&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notes: Lots of this is just paraphrasing stuff you guys said in the discussion section. In terms of citations, should it be a superscripted citation next to the fact snippet we used, or should it just be a list of sources at the bottom?&lt;br /&gt;
&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 17:51, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Sources =&lt;br /&gt;
&lt;br /&gt;
[1] http://www.ibm.com/developerworks/linux/library/l-completely-fair-scheduler/index.html&lt;br /&gt;
&lt;br /&gt;
[2] http://www.mjmwired.net/kernel/Documentation/scheduler/sched-nice-design.txt&lt;br /&gt;
&lt;br /&gt;
[3] http://oreilly.com/catalog/linuxkernel/chapter/ch10.html#94726&lt;br /&gt;
&lt;br /&gt;
[4] http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2454</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_5&amp;diff=2454"/>
		<updated>2010-10-07T12:53:49Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Essay Preview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Discussion=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduler algorithm ran in O(n) time, so as more tasks were added the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task-management issues in O(1) (constant) time, as well as to address the multiprocessing issues. &lt;br /&gt;
&lt;br /&gt;
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that allowed only one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hi Mike, &lt;br /&gt;
Can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!&lt;br /&gt;
&lt;br /&gt;
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm, which changed in 2.6.23: CFS is not based on a queue like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization and performance at the same time.&lt;br /&gt;
&lt;br /&gt;
In FreeBSD 5, the ULE scheduler was introduced but disabled by default in the early versions, which eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors, and it has a constant execution time regardless of the number of threads. &lt;br /&gt;
&lt;br /&gt;
More information can be found here:&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/230574/&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
http://lwn.net/Articles/240474/&lt;br /&gt;
&lt;br /&gt;
[[User:Sschnei1|Sschnei1]] 16:33, 3 October 2010 (UTC) or Sebastian&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which essentially backs up what you are saying Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&amp;amp;key2=8828216821&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=104236685&amp;amp;CFTOKEN=84340156&lt;br /&gt;
&lt;br /&gt;
Here are the highlights from the article:&lt;br /&gt;
&lt;br /&gt;
General FreeBSD knowledge:&lt;br /&gt;
      1. requires a scheduler to be selected at the time the kernel is built.&lt;br /&gt;
      2. all calls to scheduling code are resolved at compile time...this means that the overhead of indirect function calls for scheduling decisions is eliminated.&lt;br /&gt;
      3. kernels up to FreeBSD 5.1 used this scheduler, but from 5.2 onward the ULE scheduler was used.&lt;br /&gt;
&lt;br /&gt;
Original FreeBSD Scheduler:&lt;br /&gt;
      1.  threads assigned a scheduling priority which determines which &#039;run queue&#039; the thread is placed in.&lt;br /&gt;
      2.  the system scans the run queues in order of highest priority to lowest priority and executes the first thread of the first non-empty run queue it finds.&lt;br /&gt;
      3.  once a non-empty queue is found, the system spends an equal time slice on each thread in the run queue. This time slice is 0.1 seconds, and this value has not changed in over 20 years. A shorter time slice would cause overhead due to switching between threads too often, thus reducing productivity.&lt;br /&gt;
      4.  the article then provides detailed formulae on how to determine thread priority which is out of our scope for this project.&lt;br /&gt;
&lt;br /&gt;
ULE Scheduler&lt;br /&gt;
- overhaul of Original BSD scheduler to:&lt;br /&gt;
       1. support symmetric multiprocessing (SMP)&lt;br /&gt;
       2. support symmetric multithreading (SMT) on multi-core systems&lt;br /&gt;
       3. improve the scheduler algorithm to ensure execution is no longer limited by the number of threads in the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
Some interesting pieces about the Linux scheduler include:&lt;br /&gt;
      1. The Jan 2002 version included O(1) algorithm as well as additions for SMP.&lt;br /&gt;
      2. The scheduler uses 2 priority queue arrays to achieve fairness. It does this by giving each thread a time slice and a priority, and executing threads in order of highest priority to lowest. Threads that exhaust their time slice are moved to the exhausted queue, and threads with remaining time slices are kept in the active queue.&lt;br /&gt;
      3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks (see the sketch below).&lt;br /&gt;
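&lt;br /&gt;
(A sketch of the dynamic time-slice idea from point 3; the constants are invented for illustration, not the kernel&#039;s. Starvation is bounded because tasks that exhaust their slice sit in the exhausted array, and the two arrays are swapped once the active one drains.)&lt;br /&gt;
&lt;br /&gt;
 /* Illustration only: map a priority (0 = highest) to a slice. */&lt;br /&gt;
 #define MAX_PRIO     140&lt;br /&gt;
 #define MIN_SLICE_MS   5&lt;br /&gt;
 #define MAX_SLICE_MS 200&lt;br /&gt;
 &lt;br /&gt;
 int time_slice_ms(int prio)&lt;br /&gt;
 {&lt;br /&gt;
     /* linear interpolation: high priority gets a long slice */&lt;br /&gt;
     return MAX_SLICE_MS -&lt;br /&gt;
            (MAX_SLICE_MS - MIN_SLICE_MS) * prio / (MAX_PRIO - 1);&lt;br /&gt;
 }&lt;br /&gt;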
&amp;lt;br /&amp;gt;&lt;br /&gt;
I thought the dynamic time slice piece was of particular interest, as you would think this could lead to starvation if the priority were high enough on one or more threads.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:38, 3 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/).&lt;br /&gt;
--[[User:AbsMechanik|AbsMechanik]] 02:32, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?&lt;br /&gt;
--[[User:Sschnei1|Sschnei1]] 09:38, 5 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of the material in layman&#039;s terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link includes a brief description of it). I think we have enough resources/research done; we should start formulating these results into an answer now. --[[User:AbsMechanik|AbsMechanik]] 20:08, 4 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So I thought I would take a first crack at an intro for our article; please tell me what you think of the following. Note that I have included the resource used as a footnote, whose placement I indicate with the number 1, and I have tacked the details of the footnote on at the bottom:&lt;br /&gt;
&lt;br /&gt;
See Essay preview section!&lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 02:54, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
= Essay Preview =&lt;br /&gt;
&lt;br /&gt;
So just a small, quick question. Are we going to follow a certain standard for citing resources (bibliography &amp;amp; footnotes) to maintain consistency, or do we just stick with what Mike&#039;s presented?--[[User:AbsMechanik|AbsMechanik]] 12:53, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Maybe we should write the essay templates/prototypes here, to keep an overview of the discussion part.&lt;br /&gt;
&lt;br /&gt;
Just relocating previous post with suggested intro paragraph:&lt;br /&gt;
&lt;br /&gt;
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized and communicated with, all without experiencing critical errors such as race conditions or process starvation. A critical component in the management of such issues is the operating system’s scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the system resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has grown in complexity (for example, multi-core CPUs), operating system schedulers have similarly evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers. &lt;br /&gt;
&lt;br /&gt;
1 Jensen, E. Douglas, C. Douglass Locke and Hideyuki Tokuda, A Time-Driven Scheduling Model for Real-Time Operating Systems, Carnegie-Mellon University, 1985. &lt;br /&gt;
&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 03:48, 7 October 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_3_2010&amp;diff=2346</id>
		<title>COMP 3000 Lab 3 2010</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Lab_3_2010&amp;diff=2346"/>
		<updated>2010-10-05T15:11:58Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* IPC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Please answer all questions below. &lt;br /&gt;
&lt;br /&gt;
Part A is designed to be done in the lab while part B is designed to be done on your own time. Many of the questions require you to compile and run sample programs. These programs can be compiled using either the virtual machine environment you created, your own Linux installation, or your SCS Linux account.&lt;br /&gt;
&lt;br /&gt;
For all questions asking you to modify source code, you should always start with a clean version of the source code and modify that (unless otherwise directed in the question). Don’t continue modifying your solution to a previous question in order to answer the next question.&lt;br /&gt;
&lt;br /&gt;
All programs given in this assignment are written in C. A makefile is provided to compile the core programs given by the assignment—type “make” to compile. If you wish to rename files, you will need to either edit the makefile or run GCC to compile the programs on your own.&lt;br /&gt;
&lt;br /&gt;
All files are available [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/ here] individually; a ZIP archive is [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3.zip here].&lt;br /&gt;
&lt;br /&gt;
==Part A (Mandatory)==&lt;br /&gt;
&lt;br /&gt;
This part is to be completed in class.&lt;br /&gt;
&lt;br /&gt;
You may add or edit tips after each question; please do not edit the original question, however.&lt;br /&gt;
&lt;br /&gt;
===Processes and Threads===&lt;br /&gt;
&lt;br /&gt;
# The program [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/threads.c threads.c] is a multithreaded producer/consumer program. Unfortunately it consumes faster than it produces, resulting in an error. Why does it not print the same number every time?&lt;br /&gt;
# The program [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/passstr.c passstr.c] is a multithreaded program using the &amp;lt;tt&amp;gt;clone&amp;lt;/tt&amp;gt; function call. What is wrong with the way this program blocks, waiting for the string to arrive in the buffer?&lt;br /&gt;
2. printer_func keeps checking the buffer; until it gets arg, it will never quit on its own. The only chance for maker_func to run is when printer_func uses up its quantum and the CPU is scheduled to maker_func, which is a waste of time. A better way to wait is to sleep for 1 ms and then check (see the sketch below).&lt;br /&gt;
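&lt;br /&gt;
(What that better wait loop might look like; buffer_ready() is a stand-in for the real test in passstr.c, and a condition variable would be better still:)&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 extern volatile char *buffer;      /* filled in by maker_func */&lt;br /&gt;
 &lt;br /&gt;
 static int buffer_ready(void)      /* stand-in for the real check */&lt;br /&gt;
 {&lt;br /&gt;
     return buffer != 0;&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 static void wait_for_string(void)&lt;br /&gt;
 {&lt;br /&gt;
     while (!buffer_ready())&lt;br /&gt;
         usleep(1000);              /* sleep 1 ms, then check again */&lt;br /&gt;
 }&lt;br /&gt;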
&lt;br /&gt;
===Fork &amp;amp; Exec===&lt;br /&gt;
&lt;br /&gt;
# What is the difference between the &amp;lt;tt&amp;gt;clone&amp;lt;/tt&amp;gt; and the &amp;lt;tt&amp;gt;fork&amp;lt;/tt&amp;gt; function call?&lt;br /&gt;
&lt;br /&gt;
With fork(), you get a new process, and you can use exec() to replace the child&#039;s image with a new program.&lt;br /&gt;
With clone(), you get a new thread that shares properties (such as the address space) with the creating process, and you cannot replace the process with this thread.&lt;br /&gt;
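&lt;br /&gt;
(A minimal sketch contrasting the two calls: fork() gives the child a copy of the parent&#039;s memory, which exec then replaces with a new program image, while clone() with CLONE_VM creates a thread of control that shares the parent&#039;s memory, so its writes are visible to the parent.)&lt;br /&gt;
&lt;br /&gt;
 #define _GNU_SOURCE&lt;br /&gt;
 #include &amp;lt;sched.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;signal.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;sys/wait.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 static int shared = 0;&lt;br /&gt;
 &lt;br /&gt;
 static int thread_fn(void *arg)&lt;br /&gt;
 {&lt;br /&gt;
     shared = 42;               /* visible to the parent: shared memory */&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     if (fork() == 0) {         /* child: gets a copy of our memory... */&lt;br /&gt;
         execlp(&amp;quot;echo&amp;quot;, &amp;quot;echo&amp;quot;, &amp;quot;hello from the child&amp;quot;, (char *)0);&lt;br /&gt;
         _exit(1);              /* ...replaced by echo; here only if exec failed */&lt;br /&gt;
     }&lt;br /&gt;
     wait(0);&lt;br /&gt;
 &lt;br /&gt;
     char *stack = malloc(64 * 1024);   /* clone needs its own stack */&lt;br /&gt;
     pid_t tid = clone(thread_fn, stack + 64 * 1024,&lt;br /&gt;
                       CLONE_VM | SIGCHLD, 0);&lt;br /&gt;
     waitpid(tid, 0, 0);&lt;br /&gt;
     printf(&amp;quot;shared = %d\n&amp;quot;, shared);   /* prints 42 */&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;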
&lt;br /&gt;
===IPC===&lt;br /&gt;
&lt;br /&gt;
Examine the program given in [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/wait-signal.c wait-signal.c]. It multiplies two matrices together using the standard trivial algorithm (which also happens to be an n&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt; algorithm). It spawns off a child process to compute the value of each element in the resulting matrix. The program has a problem, however, in that it fails to pass the resulting values back to the parent process in order to give the right result. In this section, we will examine various methods for passing data between processes.&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;b&amp;gt;Signals&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;Signals can be sent to each process running on the system. Signals, however, don’t allow the passing of any data along with the signal. Therefore, they are most useful for triggering actions.&amp;lt;br /&amp;gt;&lt;br /&gt;
## The &amp;lt;tt&amp;gt;kill&amp;lt;/tt&amp;gt; command actually sends signals to processes. What signal does the kill command by default send to a process?&lt;br /&gt;
## Modify the [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/wait-signal.c wait-signal.c] file to use the signal function to install the signal handler instead of the sigaction function call. You can have it install the child handler alt signal handler instead of the child handler signal handler. What line did you add to install the signal handler to child handler alt?&lt;br /&gt;
## Modify the [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/wait-signal-1.c wait-signal-1.c] file to ignore the abort signal. What line did you have to add to do this?&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
Note: The file names may have been switched. Try using the wait-signal1.c for part 2 and wait-signal.c for part 3 - --[[User:AbsMechanik|AbsMechanik]] 15:11, 5 October 2010 (UTC)&lt;br /&gt;
# &amp;lt;b&amp;gt;Pipes&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;Pipes (also called FIFO’s) allow two processes to communicate through a file handle. One process writes data into a file handle and the other process can then read that data out through a related but different file handle.&lt;br /&gt;
## What happens to file descriptors across an &amp;lt;tt&amp;gt;exec&amp;lt;/tt&amp;gt; call? Write a small program that tests this behavior, i.e. that opens a file, calls execve, and then the new program attempts to read from the previously opened file descriptor. Explain how this program behaves.&lt;br /&gt;
## Compile and run [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/pipe.c pipe.c]. Notice how data is sent through the pipe by writing to one end of the pipe in the child and reading from the other end of the pipe in the parent. Also notice how the message Finished writing the data! is never displayed on the screen. The problem has to do with the SIGPIPE signal. What is the problem?&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
# &amp;lt;b&amp;gt;Shared Memory&amp;lt;/b&amp;gt;&lt;br /&gt;
## Shared memory regions are controlled by the kernel to prevent other processes from accessing the memory without permission. Like files in Unix, the shared memory regions are given read, write and execute permission. These permissions are specified in the call to shmget. Where in the arguments to shmget are the permissions specified?&lt;br /&gt;
## The permissions must be specified as a value. By reading the manpage of chmod, determine what the permission 0760 means.&lt;br /&gt;
## What number is going to be required in order for two processes owned by the same user to be able to read and write to the shared memory?&lt;br /&gt;
1. Signals:&lt;br /&gt;
  1. TERM (i.e. SIGTERM)&lt;br /&gt;
  2. signal(SIGCHLD, child_handler); plus a definition of child_handler that takes only one argument&lt;br /&gt;
3. Shared Memory:&lt;br /&gt;
  1. in the third argument: shmflg&lt;br /&gt;
  2. the user can read, write and execute; the group can read and write; others can do nothing.&lt;br /&gt;
  3. 0600&lt;br /&gt;
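&lt;br /&gt;
(The two Signals answers in context, as a compilable sketch with a stand-in handler body:)&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;signal.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* A handler installed with signal() takes a single int argument,&lt;br /&gt;
    unlike the three-argument form sigaction can install. */&lt;br /&gt;
 static void child_handler(int signum)&lt;br /&gt;
 {&lt;br /&gt;
     (void)signum;   /* stand-in body; the lab file reaps the child */&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(void)&lt;br /&gt;
 {&lt;br /&gt;
     signal(SIGCHLD, child_handler);  /* install the handler (part 2) */&lt;br /&gt;
     signal(SIGABRT, SIG_IGN);        /* ignore the abort signal (part 3) */&lt;br /&gt;
     pause();                         /* wait for a signal */&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;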
&lt;br /&gt;
==Part B (Optional)==&lt;br /&gt;
&lt;br /&gt;
The following exercises are optional.&lt;br /&gt;
&lt;br /&gt;
===Processes===&lt;br /&gt;
#From class, you know that the process descriptor contains a great deal of information about a running process on the system. The task structure in Linux is called struct task_struct. By examining the source of the Linux kernel, determine what source file this structure is defined in. The grep command may be useful in locating the correct file.&lt;br /&gt;
# Figure 6.3 (page 213) in your textbook contains a list of common elements found in a process table. Determine at least one variable in the Linux task structure which is related to each element listed in Figure 6.3. You may omit address space and stack.&lt;br /&gt;
&lt;br /&gt;
===Fork &amp;amp; Exec===&lt;br /&gt;
# Examine the flags that can be passed to the clone function call. Choose 5 flags and describe a situation in which each of them would be useful.&amp;lt;br /&amp;gt;Find the portion of the Linux kernel that implements the fork, clone, and vfork system calls for i386 systems. Based upon this code, could Linux instead just have one of these system calls?&amp;lt;br /&amp;gt; If so, which one, and how would you implement userspace “wrappers” that would provide identical functionality for the other two calls?&amp;lt;br /&amp;gt;If not, why are all three necessary? Explain. (For this question, ignore issues of binary compatibility.)&lt;br /&gt;
# File descriptors 0, 1, and 2 are special in Linux, in that they refer to standard in, standard out, and standard error. Does the Linux kernel know they are special? Explain, referring to appropriate parts of the Linux kernel source.&lt;br /&gt;
&lt;br /&gt;
===IPC===&lt;br /&gt;
In this section, you will be modifying the program wait-signal to correctly compute the value of the matrix multiplication.&lt;br /&gt;
# &amp;lt;b&amp;gt;Signals&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;Describe in words how you might modify the wait-signal program to correctly pass back the value computed in the child to the parent using only signals. Remember that signals do not allow data to be passed back and forth. Also keep in mind that there are only around 32 signals that can be sent to a process. You do not have to implement your answer, only describe what you would do.&lt;br /&gt;
# &amp;lt;b&amp;gt;Pipes&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;Modify the [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/wait-signal.c wait-signal.c] program to pass the appropriate matrix data back to the parent via a pipe. Remember that you will also have to pass back the x and y locations that the data should be put in. What is your updated main function?&lt;br /&gt;
# &amp;lt;b&amp;gt;Shared Memory&amp;lt;/b&amp;gt;&amp;lt;br /&amp;gt;Modify [http://homeostasis.scs.carleton.ca/~soma/os-2010f/lab3/wait-signal.c wait-signal.c] to send data back to the main process using shared memory. You will need to use the functions shmget and shmat.&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2299</id>
		<title>COMP 3000 Essay 1 2010 Question 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_5&amp;diff=2299"/>
		<updated>2010-10-03T20:33:27Z</updated>

		<summary type="html">&lt;p&gt;AbsMechanik: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
&lt;br /&gt;
Compare and contrast the evolution of the default BSD/FreeBSD and Linux schedulers.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
&lt;br /&gt;
I found some resources which might be useful for answering this question. As far as I know, FreeBSD uses a multilevel feedback queue, and Linux in its current version uses the Completely Fair Scheduler.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Some text about FreeBSD-scheduling http://www.informit.com/articles/article.aspx?p=366888&amp;amp;seqNum=4&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-ULE Thread Scheduler: http://www.scribd.com/doc/3299978/ULE-Thread-Scheduler-for-FreeBSD&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Completely Fair Scheduler: http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
-Sebastian&lt;br /&gt;
&lt;br /&gt;
Also found a nice link with regards to the new Linux Scheduler for those interested:&lt;br /&gt;
http://www.ibm.com/developerworks/linux/library/l-scheduler/&lt;br /&gt;
&amp;lt;br /&amp;gt;It is also referred to as the O(1) scheduler in algorithmic terms (CFS is an O(log n) scheduler). Both were developed by Ingo Molnár.&lt;br /&gt;
-Abhinav&lt;/div&gt;</summary>
		<author><name>AbsMechanik</name></author>
	</entry>
</feed>