Operating Systems 2014F: Tutorial 3
In this tutorial you will be playing with the scheduling simulators from the textbook. Please do as many of the following homework exercises as you can during the tutorial time. Note that you should expect to do simple scheduling calculations on the midterm, so now is your time to learn how to do them. They should also help you better understand how schedulers work so you can answer the conceptual questions in the assignment.
The scheduling simulators are all Python programs that can be run from the Linux command line (or other Python execution environment). There is a README accompanying each that explains how to run them, but the basic idea is that you run them to create a scheduling problem and then (with the "-c" option) they give the solution to that problem, so you can check whether your answers are right!
The Python code and links to associated chapters are on the textbook's homework page. (local backup)
Chapter 7, Scheduling: Introduction
For these questions use scheduler.py (type "tar xzf HW-Scheduler.tgz" to extract).
- Compute the response time and turnaround time when running three jobs of length 200 with the SJF and FIFO schedulers. (A short Python sketch for checking these calculations appears after this list.)
- Now do the same but with jobs of different lengths: 100, 200, and 300.
- Now do the same, but also with the RR scheduler and a time-slice of 1.
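If you want a way to check your hand calculations before running scheduler.py with the "-c" option, the following minimal Python sketch may help. It is not the textbook simulator; it assumes all jobs arrive at time 0 (as scheduler.py does by default), and the helper names are just ones chosen for this tutorial.

    # Minimal sketch for checking hand calculations; assumes all jobs
    # arrive at time 0.

    def fifo_or_sjf(lengths, sjf=False):
        """Average (response, turnaround) time for FIFO or SJF."""
        order = sorted(lengths) if sjf else list(lengths)
        t, resp, turn = 0, [], []
        for run in order:
            resp.append(t)   # response time = first scheduled - arrival (0)
            t += run
            turn.append(t)   # turnaround time = completion - arrival (0)
        return sum(resp) / len(order), sum(turn) / len(order)

    def round_robin(lengths, quantum=1):
        """Average (response, turnaround) time for RR with a given time slice."""
        remaining = list(lengths)
        first_run = [None] * len(lengths)
        turnaround = [0] * len(lengths)
        t = 0
        while any(r > 0 for r in remaining):
            for i, r in enumerate(remaining):
                if r <= 0:
                    continue
                if first_run[i] is None:
                    first_run[i] = t
                step = min(quantum, r)
                t += step
                remaining[i] -= step
                if remaining[i] == 0:
                    turnaround[i] = t
        n = len(lengths)
        return sum(first_run) / n, sum(turnaround) / n

    for lengths in ([200, 200, 200], [100, 200, 300]):
        print(lengths)
        print("  FIFO:   ", fifo_or_sjf(lengths))
        print("  SJF:    ", fifo_or_sjf(lengths, sjf=True))
        print("  RR q=1: ", round_robin(lengths, quantum=1))

You can then compare these numbers against what scheduler.py prints with the "-c" option for the same job lengths.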
Chapter 8, Scheduling: The Multi-Level Feedback Queue
For these questions use mlfq.py.
- Run a few randomly-generated problems with just two jobs and two queues; compute the MLFQ execution trace for each. Make your life easier by limiting the length of each job and turning off I/Os.
- How would you run the scheduler to reproduce each of the examples in the chapter?
- How would you configure the scheduler parameters to behave just like a round-robin scheduler?
- Craft a workload with two jobs and scheduler parameters so that one job takes advantage of the older Rules 4a and 4b (turned on with the -S flag) to game the scheduler and obtain 99% of the CPU over a particular time interval.
- Given a system with a quantum length of 10 ms in its highest queue, how often would you have to boost jobs back to the highest priority level (with the -B flag) in order to guarantee that a single long-running (and potentially-starving) job gets at least 5% of the CPU? (A back-of-the-envelope sketch after this list may help.)
- One question that arises in scheduling is which end of a queue a job that just finished I/O should be added to; the -I flag changes this behavior for this scheduling simulator. Play around with some workloads and see if you can see the effect of this flag.
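For the boost-interval question, a rough back-of-the-envelope model (an assumption for getting started, not something mlfq.py computes for you) is that the long-running job gets to run for about one full quantum at the top queue each time priorities are boosted, so its CPU share is roughly the quantum divided by the boost interval:

    # Rough model (assumption): the starving job runs for about one quantum
    # at the top queue per boost, so its CPU share ~ quantum / boost_interval.
    def cpu_share(quantum_ms, boost_interval_ms):
        return quantum_ms / boost_interval_ms

    for boost in (100, 200, 400):
        print(boost, "ms between boosts ->", 100 * cpu_share(10, boost), "% of the CPU")

From this you can solve for the largest boost interval that keeps the share at or above 5%, then verify your answer by experimenting with the -B flag.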
Chapter 9, Scheduling: Proportional Share
For these questions use lottery.py.
- Compute the solutions for simulations with 3 jobs and random seeds of 1, 2, and 3.
- Now run with two specific jobs: each of length 10, but one (job 0) with just 1 ticket and the other (job 1) with 100 (e.g., -l 10:1,10:100). What happens when the number of tickets is so imbalanced? Will job 0 ever run before job 1 completes? How often? In general, what does such a ticket imbalance do to the behavior of lottery scheduling?
- When running with two jobs of length 100 and equal ticket allocations of 100 (-l 100:100,100:100), how unfair is the scheduler? Run with some different random seeds to determine the (probabilistic) answer; let unfairness be determined by how much earlier one job finishes than the other. (A small simulation sketch after this list shows one way to measure this.)
- How does your answer to the previous question change as the quantum size (-q) gets larger?
- Can you make a version of the graph that is found in the chapter? What else would be worth exploring? How would the graph look with a stride scheduler?
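If you would rather script the unfairness experiment than eyeball traces, here is a minimal lottery-scheduling sketch. It is a stand-in written for this tutorial, not lottery.py itself, and the function name and metric are just one reasonable choice: it runs two jobs of equal length and reports the finish time of the first job to complete divided by the finish time of the last (1.0 would be perfectly fair).

    import random

    def unfairness(length=100, tickets=(100, 100), quantum=1, seed=0):
        """Simulate lottery scheduling for two jobs; return first finish / last finish."""
        rng = random.Random(seed)
        remaining = [length, length]
        finish = [None, None]
        t = 0
        while any(r > 0 for r in remaining):
            # Hold a lottery over the tickets of the jobs that are still running.
            total = sum(tk for tk, r in zip(tickets, remaining) if r > 0)
            winner = rng.randint(0, total - 1)
            counter = 0
            for i, (tk, r) in enumerate(zip(tickets, remaining)):
                if r <= 0:
                    continue
                counter += tk
                if winner < counter:
                    break
            step = min(quantum, remaining[i])
            t += step
            remaining[i] -= step
            if remaining[i] == 0:
                finish[i] = t
        first, last = sorted(finish)
        return first / last

    for seed in range(5):
        print("seed", seed, "unfairness", round(unfairness(seed=seed), 3))

Increasing the quantum argument lets you explore the previous question about -q, and replacing the random draw with a deterministic pass/stride counter would turn this into a crude stride scheduler for comparison in the last question.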