Operating Systems 2014F: Assignment 3
Please submit the answers to the following questions via CULearn by midnight on Wednesday, September 24, 2014. There are 10 points in 7 questions.
Submit your answers as a single text file named "<username>-comp3000-assign3.txt" (where username is your MyCarletonOne username). The first four lines of this file should be "COMP 3000 Assignment 3", your name, student number, and the date of submission. You may wish to format your answers in Markdown to improve their appearance.
No other formats will be accepted. Submitting in another format will likely result in your assignment not being graded and you receiving no marks for this assignment. In particular, do not submit an MS Word or OpenOffice file as your answers document!
Don't forget to include what outside resources you used to complete each of your answers, including other students, man pages, and web resources. You do not need to list help from the instructor, TA, or information found in the textbook.
Questions
- [1] For what types of workloads does SJF deliver the same turnaround times as FIFO?
- [1] For what types of workloads and quantum lengths does SJF deliver the same response times as RR?
- [1] What happens to response time with SJF as job lengths increase?
- [2] What happens to response time with RR as quantum lengths increase? Explain in words and write an equation that gives the worst-case response time, given N jobs.
- [2] "Preemptive schedulers are always less efficient than non-preemptive schedulers." Explain how this statement can be true if you define efficiency one way and how it is false if you define efficiency another way.
- [2] Explain two ways in which standard OS schedulers (such as those in Linux or Windows) are not fair to processes and why this is a good thing!
- [1] MLFQ-type schedulers are more common in systems with single users, while systems that support multiple users tend to use more proportional share-type schedulers. Why might this be?
Solutions
- SJF has the same turnaround time as FIFO when all of the jobs are the same size, or when the jobs arrive in order of increasing size (the shortest job arrives first, then the second shortest, then the third, and so on).
- SJF delivers the same response times as RR when the scheduling quantum of RR is larger than the largest job to be serviced (so there is no preemption in RR) and when the jobs arrive in order of increasing job size.
- As job lengths increase, average response time goes up. If the jobs are sorted in order of increasing size, then the n-th job's response time equals the sum of the execution times of all the previous n-1 jobs plus that of the current job. Increasing the size of any job thus increases the response time for it and for all larger jobs.
- With RR, as quantum lengths increase, response time increases, because the time a process must wait for its next time slice is (in the worst case) proportional to the number of other processes and the maximum amount of time each of them can execute. Thus, on a system with N processes and a quantum of length Q, a process will have to wait up to Q(N-1) time units to respond to a request in the worst case. (The simulation sketch after this list illustrates the timing of FIFO, SJF, and RR on a small example.)
- If you define efficiency as the ratio of CPU time spent on OS services to time spent running userspace programs, preemptive schedulers are always less efficient because they switch between processes more often and hence run the scheduler and perform context switches more often. Time spent running the scheduler and doing context switches is pure overhead. If you define efficiency in terms of response time - how long it takes the system to respond to input - preemptive schedulers are always more efficient than non-preemptive schedulers because they can quickly switch to the process or thread that needs to process the input.
- Standard OS schedulers are not fair because they give more resources, more quickly, to some processes than to others. Fairness in scheduling is normally defined as giving equal shares of CPU time to each process. With prioritization we have unfairness due to policy: some processes are marked as being more important than others, so they get more resources in a more timely fashion. Schedulers are also unfair in that they tend to prioritize interactive processes over other processes at the same priority level. We want schedulers to be unfair because some processes should be prioritized: when I am playing a game, I do not want a background virus scan to cause the game to slow down.
- MLFQ-type schedulers allocate resources on a per-process basis. On a system that has only one user this isn't a factor; on a system with multiple users, however, it can be a problem because any user can increase their share of resources simply by running more processes. Fair-share schedulers can allocate a fixed proportion of resources to each user, no matter how many processes they choose to run. Fair-share schedulers have traditionally not been used for CPU time because of their greater complexity (they must keep track of process groups rather than just processes); however, the current Linux scheduler, CFS, is a weighted fair-share scheduler that does special accounting for blocked processes. (The toy share calculation below illustrates the per-process versus per-user difference.)
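Below is a minimal simulation sketch, not part of the original solutions, that computes average turnaround and response times for FIFO, SJF, and RR under the simplifying assumption that every job arrives at time 0. The job lengths and quantum values are invented examples; here response time means the time until a job first runs, and turnaround time means the time until it completes.

    # scheduling_sketch.py -- toy comparison of FIFO, SJF, and RR
    def run_in_order(lengths, sort_first=False):
        """FIFO runs jobs in arrival order; SJF is FIFO on a size-sorted job list."""
        order = sorted(lengths) if sort_first else list(lengths)
        t, turnaround, response = 0, [], []
        for length in order:
            response.append(t)        # time until this job first runs
            t += length
            turnaround.append(t)      # time until this job completes
        n = len(order)
        return sum(turnaround) / n, sum(response) / n

    def round_robin(lengths, quantum):
        """Cycle through the jobs, giving each at most one quantum per turn."""
        remaining = list(lengths)
        first_run = [None] * len(lengths)
        finish = [0] * len(lengths)
        t = 0
        while any(r > 0 for r in remaining):
            for i, r in enumerate(remaining):
                if r <= 0:
                    continue
                if first_run[i] is None:
                    first_run[i] = t
                step = min(quantum, r)
                t += step
                remaining[i] -= step
                if remaining[i] == 0:
                    finish[i] = t
        n = len(lengths)
        return sum(finish) / n, sum(first_run) / n

    jobs = [30, 10, 20]   # hypothetical job lengths
    print("FIFO    (avg turnaround, avg response):", run_in_order(jobs))
    print("SJF     (avg turnaround, avg response):", run_in_order(jobs, sort_first=True))
    print("RR, Q=5 (avg turnaround, avg response):", round_robin(jobs, quantum=5))
    # Jobs sorted by size plus a quantum larger than the longest job:
    # RR's response times match SJF's (answer 2).
    print("RR, Q=50, sorted jobs:", round_robin(sorted(jobs), quantum=50))

With equal-length jobs the FIFO and SJF lines report identical numbers (answer 1); jobs sorted by size with an oversized quantum make RR's response times match SJF's (answer 2); and with Q=5 and three jobs, no job waits more than Q(N-1) = 10 time units between time slices (answer 4).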
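As a toy illustration of the last answer (the user names and process counts are invented), the following sketch compares a per-process split, where a user's share grows with the number of processes they run, against a per-user fair-share split:

    # shares_sketch.py -- per-process vs. fair-share CPU allocation
    users = {"alice": 1, "bob": 4}          # runnable processes per user

    total_procs = sum(users.values())
    per_process = {u: n / total_procs for u, n in users.items()}  # MLFQ-style: equal share per process
    fair_share = {u: 1 / len(users) for u in users}               # fair-share: equal share per user

    print("per-process shares:", per_process)  # bob gets 80% just by running four processes
    print("fair-share shares: ", fair_share)   # each user gets 50% regardless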