Talk:COMP 3000 Essay 1 2010 Question 5
Discussion
From what I have been reading, the early versions of the Linux scheduler had a very hard time managing high numbers of tasks at the same time. Although I do not know exactly how it worked, the scheduling algorithm ran in O(n) time, so as more tasks were added, the scheduler became slower. In addition, a single data structure was used to manage all processors of a system, which created a problem with managing cached memory between processors. The Linux 2.6 scheduler was built to resolve the task management issue in O(1) (constant) time, as well as to address the multiprocessing issues.
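To make the O(n) point concrete, here is a toy sketch (my own, not kernel code) of the kind of pick-next loop the old scheduler used: every scheduling decision walks the whole run list and scores each candidate, so the cost grows with the number of runnable tasks. The goodness() function here is a simplified stand-in for the kernel's real heuristic, and all names are illustrative.
<pre>
/* Toy sketch of why the pre-2.6 Linux scheduler was O(n). */
struct task {
    int counter;       /* remaining ticks in the current epoch */
    int priority;
    struct task *next; /* linked run list of all runnable tasks */
};

static int goodness(const struct task *t) {
    return t->counter + t->priority; /* simplified score */
}

/* Cost grows linearly with the number of runnable tasks. */
static struct task *pick_next(struct task *runlist) {
    struct task *best = runlist;
    int best_score = -1;
    for (struct task *t = runlist; t != 0; t = t->next) {
        int s = goodness(t);
        if (s > best_score) {
            best_score = s;
            best = t;
        }
    }
    return best;
}
</pre>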
It appears as though BSD also had issues with task management; however, for BSD this was due to a locking mechanism that only allowed one process at a time to operate in kernel mode. FreeBSD 5 changed this locking mechanism to allow multiple processes to run in kernel mode at the same time, advancing the success of symmetric multiprocessing.
--Mike Preston 18:38, 3 October 2010 (UTC)
Hi Mike, can you give names for the schedulers you are talking about? I think it is easier to distinguish them by name rather than by algorithm. It is just a suggestion!
The O(1) scheduler was replaced in Linux kernel 2.6.23 with CFS (the Completely Fair Scheduler), which runs in O(log n). Also, the schedulers before CFS were based on a multilevel feedback queue algorithm; that changed in 2.6.23. CFS is not based on run queues like most schedulers, but on a red-black tree that implements a timeline of future task execution. The aim of CFS is to maximize CPU utilization while maximizing interactive performance at the same time.
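To make the CFS idea concrete, here is a toy model (not kernel code): each task accumulates "virtual runtime" and the scheduler always picks the task that has run least. The real CFS keeps tasks in the red-black tree so the leftmost node (smallest vruntime) is found cheaply; this sketch uses a plain array only for brevity, and all names are illustrative.
<pre>
#include <stdio.h>

struct task {
    const char *name;
    double vruntime; /* weighted CPU time consumed so far */
    double weight;   /* derived from the task's nice value */
};

/* Pick the runnable task with the smallest vruntime. */
static struct task *pick_next(struct task *tasks, int n) {
    struct task *next = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < next->vruntime)
            next = &tasks[i];
    return next;
}

int main(void) {
    struct task tasks[] = {
        { "editor", 0.0, 2.0 }, /* higher weight -> vruntime grows slowly */
        { "batch",  0.0, 1.0 },
    };
    for (int tick = 0; tick < 6; tick++) {
        struct task *t = pick_next(tasks, 2);
        /* Charge one tick of real time scaled by weight, so heavier
         * (higher-priority) tasks accumulate vruntime more slowly
         * and therefore get picked more often. */
        t->vruntime += 1.0 / t->weight;
        printf("tick %d: ran %s (vruntime now %.2f)\n",
               tick, t->name, t->vruntime);
    }
    return 0;
}
</pre>
Running it shows the heavier "editor" task being chosen more often than "batch", which is the fairness property the tree ordering provides.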
In FreeBSD 5, the ULE scheduler was introduced, though it was disabled by default in the early versions; this eventually changed later on. ULE has better support for SMP and SMT, allowing it to improve overall performance on both uniprocessors and multiprocessors. It also has a constant execution time, regardless of the number of threads.
More information can be found here:
http://lwn.net/Articles/230574/
http://lwn.net/Articles/240474/
Sschnei1 16:33, 3 October 2010 (UTC) or Sebastian
Here is another article which essentially backs up what you are saying, Sebastian: http://delivery.acm.org/10.1145/1040000/1035622/p58-mckusick.pdf?key1=1035622&key2=8828216821&coll=GUIDE&dl=GUIDE&CFID=104236685&CFTOKEN=84340156
Here are the highlights from the article:
General FreeBSD knowledge:
1. FreeBSD requires a scheduler to be selected at the time the kernel is built.
2. All calls to scheduling code are resolved at compile time; this means the overhead of indirect function calls for scheduling decisions is eliminated.
3. Kernels up to FreeBSD 5.1 used the original scheduler by default, but from 5.2 onward the ULE scheduler is used.
Original FreeBSD Scheduler:
1. Threads are assigned a scheduling priority, which determines which run queue a thread is placed in.
2. The system scans the run queues in order from highest priority to lowest and executes the first thread of the first non-empty run queue it finds (a minimal sketch of this scan follows after this list).
3. Once a non-empty queue is found, the system spends an equal time slice on each thread in that run queue. The time slice is 0.1 seconds, a value that has not changed in over 20 years; a shorter time slice would cause overhead from switching between threads too often, reducing productivity.
4. The article then provides detailed formulae for determining thread priority, which is out of scope for this project.
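For what it's worth, here is a minimal sketch of the queue scan from points 1 and 2. The names and the fixed queue count are my own invention; real kernels also keep a bitmap of non-empty queues so they can usually skip the scan entirely.
<pre>
#include <stddef.h>

#define NQUEUES 32 /* one queue per priority band */

struct thread {
    const char *name;
    struct thread *next; /* round-robin link within a queue */
};

static struct thread *runq[NQUEUES]; /* runq[0] = highest priority */

/* Scan queues from highest to lowest priority and dequeue the
 * first thread of the first non-empty queue. */
static struct thread *choose_thread(void) {
    for (int q = 0; q < NQUEUES; q++) {
        if (runq[q] != NULL) {
            struct thread *t = runq[q];
            runq[q] = t->next; /* rest of the queue keeps its order */
            return t;
        }
    }
    return NULL; /* no runnable thread: idle */
}
</pre>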
ULE Scheduler - overhaul of Original BSD scheduler to:
1. support symmetric multiprocessing (SMP)
2. support symmetric multithreading (SMT) on multi-core systems
3. improve the scheduling algorithm so that the scheduler's execution time is no longer tied to the number of threads in the system.
Here is another article which gives some great overview of a bunch of versions/the evolution of different schedulers: https://www.usenix.org/events/bsdcon03/tech/full_papers/roberson/roberson.pdf
Some interesting pieces about the Linux scheduler include:
1. The January 2002 version included the O(1) algorithm as well as additions for SMP.
2. The scheduler uses two priority arrays to achieve fairness. It gives each thread a time slice and a priority, and executes threads in order from highest priority to lowest. Threads that exhaust their time slice are moved to the expired array, while threads with time slice remaining stay in the active array (a toy model of this follows after this list).
3. Time slices are DYNAMIC: larger time slices are given to higher-priority tasks, smaller slices to lower-priority tasks.
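Here is a toy model of the two-array trick from point 2. The names are invented and the per-priority queues are omitted so the O(1) swap stands out; the real scheduler indexes each array by priority and uses a bitmap to find the highest-priority task in constant time.
<pre>
#include <stdio.h>

#define MAXTASK 8
#define SLICE   3 /* refill value, in ticks */

struct task {
    const char *name;
    int slice_left; /* remaining time slice, in ticks */
};

struct prio_array {
    int ntasks;
    struct task tasks[MAXTASK];
};

static struct prio_array arrays[2];
static struct prio_array *active  = &arrays[0];
static struct prio_array *expired = &arrays[1];

static void run_one_tick(void) {
    if (active->ntasks == 0) {
        /* Active array is empty: swap the two pointers.  This is
         * the O(1) trick -- no task is copied or rescanned. */
        struct prio_array *tmp = active;
        active = expired;
        expired = tmp;
    }
    if (active->ntasks == 0)
        return; /* nothing runnable at all */

    struct task *t = &active->tasks[active->ntasks - 1];
    t->slice_left--;
    printf("ran %s (%d ticks left)\n", t->name, t->slice_left);

    if (t->slice_left == 0) {
        /* Slice exhausted: refill it and move the task to the
         * expired array until the next swap. */
        t->slice_left = SLICE;
        expired->tasks[expired->ntasks++] = *t;
        active->ntasks--;
    }
}

int main(void) {
    active->tasks[active->ntasks++] = (struct task){ "A", 2 };
    active->tasks[active->ntasks++] = (struct task){ "B", 1 };
    for (int i = 0; i < 6; i++)
        run_one_tick();
    return 0;
}
</pre>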
I thought the dynamic time slice piece was of particular interest, as you would think it could lead to starvation if the priority of one or more threads were high enough.
--Mike Preston 18:38, 3 October 2010 (UTC)
This is essentially a summarized version of the aforementioned information regarding CFS (http://www.ibm.com/developerworks/linux/library/l-scheduler/). --AbsMechanik 02:32, 4 October 2010 (UTC)
I have seen this website and thought it was useful. Do you think this is enough research to write the essay, or are we going to do some more?
--Sschnei1 09:38, 5 October 2010 (UTC)
I also stumbled upon this website: http://my.opera.com/blu3c4t/blog/show.dml/1531517. It explains a lot of the material in layman's terms (I had a lot of trouble finding more info on the default BSD scheduler, but this link includes a brief description of it). I think we have enough resources/research done; we should start formulating these results into an answer now. --AbsMechanik 20:08, 4 October 2010 (UTC)
So I thought I would take a first crack at an intro for our article; please tell me what you think of the following. Note that I have included the resource used as a footnote, whose placement I indicate with the number 1, and I have tacked the details of the footnote on at the bottom:
See Essay preview section!
--Mike Preston 02:54, 6 October 2010 (UTC)
Essay Preview
Maybe we should write the essay drafts/prototypes here, to keep the discussion section easy to follow.
Just relocating the previous post with the suggested intro paragraph:
One of the most difficult problems that operating systems must handle is process management. To ensure that a system runs efficiently, processes must be maintained, prioritized, categorized, and communicated with, all without critical errors such as race conditions or process starvation. A key component in the management of these issues is the operating system's scheduler. The goal of a scheduler is to ensure that all processes of a computer system get access to the resources they require as efficiently as possible, while maintaining fairness for each process, limiting CPU wait times, and maximizing the throughput of the system.1 As computer hardware has grown in complexity (multi-core CPUs, for example), operating system schedulers have evolved to handle these additional challenges. In this article we will compare and contrast the evolution of two such schedulers: the default BSD/FreeBSD and Linux schedulers.
1 Jensen, E. Douglas, C. Douglass Locke, and Hideyuki Tokuda. A Time-Driven Scheduling Model for Real-Time Operating Systems. Carnegie-Mellon University, 1985.
--Mike Preston 03:48, 7 October 2010 (UTC)