Operating Systems 2014F Lecture 5

From Soma-notes


previous class:

  • In the previous lecture we learned that it’s very difficult to write a scheduler
  • tradeoff between response time and turnaround time
  • FIFO, SJF, RR, MLFQ - the last an attempt to balance turnaround time and response time by approximating SJF

Why is it so hard to write a scheduler? The amount of information the scheduler has about a process is very limited on a commodity operating system (some systems are different - ‘real-time OSs’). What we want the scheduler to do is complicated and often conflicting: we want the system to be responsive/interactive, but we also want our jobs to get done. We also have to deal with a lot of corner cases: what if a long-running background process suddenly becomes highly interactive? What if an application developer tries to game the scheduler? What if we have processes that should have arbitrarily high priority for no discernible reason? These corner cases lead to complicated and inelegant solutions like priority boosting and accounting.

in this class:

We will examine an alternative approach to scheduling that optimizes for proportionality rather than turnaround or response time. We will also discuss the concerns with scheduling processes in a multiprocessor environment.

Proportional-Share Scheduling

  • Let’s forget turnaround and response time. What if we wanted to optimize for proportionality instead (i.e., ensure that each process receives a proportional share of the CPU, where the exact proportion is defined by us)?

Lottery Scheduling is one way to accomplish this.

Lottery Scheduling

The basic unit is the ticket. Each process has a certain number of tickets, and the number of tickets represents the share of the resources the process should get. Say we have two processes, A and B. We want A to have 75% of the resources, and B to have 25%. How can we implement this? The scheduler generates a random number n from 0 to 99. Tickets 0 to 74 belong to process A, and tickets 75 to 99 belong to process B; whichever process holds the winning ticket runs next.
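As a quick sanity check, here is a tiny Python sketch of this draw (the draw count and function name are illustrative, not from the lecture):

```python
import random

# Two processes: A holds tickets 0-74 (75%), B holds tickets 75-99 (25%).
def pick_winner():
    n = random.randrange(100)        # random ticket number, 0..99
    return "A" if n < 75 else "B"

# Over many draws, A should win roughly 75% of the time.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[pick_winner()] += 1
print(counts)
```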

This is easily implemented as a singly linked list. Imagine three processes, A, B, and C, with 100 tickets each (draw three circles on board). A counter goes through the list and increments by the number of tickets at each node; when it goes above the winning ticket number, we run that process. Design consideration: what if A and B have 1 ticket each, and C has 100? Then we should sort the list so that the process with the most tickets comes first, etc. This reduces the number of checks we have to do.
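A minimal sketch of this linked-list lottery in Python, including the sorting optimization (the class and function names are my own, not from the lecture):

```python
import random

class Node:
    """One process in the singly linked ticket list."""
    def __init__(self, name, tickets):
        self.name = name
        self.tickets = tickets
        self.next = None

def build_list(jobs):
    # Prepend in ascending ticket order so the largest holder ends up
    # at the head - fewer expected checks per draw.
    head = None
    for name, tickets in sorted(jobs, key=lambda j: j[1]):
        node = Node(name, tickets)
        node.next = head
        head = node
    return head

def draw(head, total):
    winner = random.randrange(total)  # winning ticket number
    counter = 0
    node = head
    while node is not None:
        counter += node.tickets       # increment by tickets at this node
        if counter > winner:
            return node.name          # counter passed the winner: run this one
        node = node.next

jobs = [("A", 1), ("B", 1), ("C", 100)]
head = build_list(jobs)
total = sum(t for _, t in jobs)
print(draw(head, total))  # almost always "C", found in a single check
```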

Lottery Scheduling - Ticket Mechanisms

This abstraction of a ticket is more useful than it may seem.

  • ticket currency - process/user can allocate tickets to subjobs, perhaps in a different ratio (or currency)
  • ticket transfer - I can give my tickets to another process. Imagine a client/server setting where the client hands off a job to the server, then transfers its tickets to the server so it can finish the job.
  • ticket inflation - process can temporarily raise or lower the number of tickets it owns. Note this is only possible in a non-competitive scenario.
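The currency idea can be made concrete with a small sketch: convert each job’s local tickets into the global currency in proportion to the user’s global share (the numbers and names here are hypothetical):

```python
# Ticket currency sketch: a user funds jobs in a local currency, which is
# converted into global tickets in proportion to the user's global share.
def to_global(user_global_tickets, local_allocations):
    """Convert each job's local tickets into global tickets."""
    local_total = sum(local_allocations.values())
    return {job: user_global_tickets * local / local_total
            for job, local in local_allocations.items()}

# User A has 100 global tickets and splits 1000 local tickets
# between two jobs in a 700/300 ratio.
print(to_global(100, {"A1": 700, "A2": 300}))  # {'A1': 70.0, 'A2': 30.0}
```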

Okay, great. So we have all these tickets, but how do we assign them? Potential solutions: give each user an equal share of tickets, have programs assign themselves a certain number of tickets, etc. Whatever we do, there really isn’t a good way to assign tickets. It can work in some specialized cases, but probably won’t be the best for a general purpose OS.

Deterministic Lottery Scheduling

Possible problem with using a non-deterministic scheduler: though on average the system will probabilistically reach the run-time proportions we want, over the short term it may randomly bias towards one process or user. However, notice that we can transform probabilistic lottery scheduling into deterministic lottery scheduling pretty easily. Stride scheduling: each process has a stride, which is inversely proportional to the number of tickets it has (e.g., with tickets A -> 100, B -> 50, C -> 250, divide 10,000 by each to get strides A -> 100, B -> 200, C -> 40). Every time a process runs, increment its counter (the pass value) by its stride. Always choose the process with the lowest pass value (if two are equal, pick between them in some arbitrary way).
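A sketch of stride scheduling in Python, using the ticket numbers above (the function name and tie-breaking rule are my own choices):

```python
# Stride scheduling sketch: stride = big constant / tickets.
# With tickets A=100, B=50, C=250 and constant 10,000,
# the strides are A=100, B=200, C=40.
STRIDE_CONSTANT = 10_000

def stride_schedule(tickets, slices):
    """Run for `slices` time slices; return the order in which processes ran."""
    stride = {p: STRIDE_CONSTANT // t for p, t in tickets.items()}
    pass_value = {p: 0 for p in tickets}
    order = []
    for _ in range(slices):
        # Pick the process with the lowest pass value
        # (ties broken arbitrarily - here, by name).
        p = min(pass_value, key=lambda q: (pass_value[q], q))
        order.append(p)
        pass_value[p] += stride[p]
    return order

runs = stride_schedule({"A": 100, "B": 50, "C": 250}, 400)
# Deterministically exact proportions: A gets 100/400 of the slices,
# B gets 50/400, and C gets 250/400.
print(runs.count("A"), runs.count("B"), runs.count("C"))
```

Unlike the lottery version, this is exact over every full cycle, not just in expectation.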

Why pick randomness or determinism?

Why use randomness?

 1. Random scheduling does not need to remember state (compare to the amount of state that is kept in an MLFQ).
 2. Randomness can avoid strange corner cases with weird behaviour (because it doesn’t remember state).
 3. It is simple to implement.
 4. Calculating which process should run next is very quick - O(1).

Why not use randomness? We can’t guarantee that processes will receive their share over the short term - only that over the long term the system will tend towards the proportions we want. Typically, this solution isn’t used because of how difficult it is to figure out how to assign tickets. But one example where it can be put to good use is something like a VM environment, where we want 3/4 of the CPU to go to the host and 1/4 to go to the VM.

Multiprocessor Scheduling

All the algorithms we have previously looked at assume that there is one processor sharing its resources. In most modern computers there are more likely two, four, or even more. An important concept in a multiprocessor environment is caches and the memory hierarchy: processors keep a local cache of frequently accessed data.

  • temporal locality - data that has been accessed in the past is likely to be accessed again in the future. Think of a loop or an in-scope variable.
  • spatial locality - data that is near other data is likely to be accessed - think of an array.

These caches impact scheduling in two important ways. First, there’s an entire field of research, called cache coherence, on how to keep these caches synchronized across different processors; it is difficult and requires a lot of locking. Second, there is cache affinity: processes build up state on a processor (in the caches, in the TLB (i.e., the cache for address translations), etc.), and moving them between processors frequently can decrease performance.

First Solution - Single Queue Multiprocessor Scheduling (SQMS)

Have a single queue of jobs. Then use whatever other algorithm we want to pick the best job (or jobs) to run next when a processor is free. One advantage: it is conceptually very simple.

  • Problem 1: it is not scalable. The single queue is a shared data structure and so must be locked every time it is accessed. As the number of processors increases it will be locked more and more frequently, until it is basically locked all the time.
  • Problem 2: cache affinity. Imagine we have five jobs (A, B, C, D, and E) and four processors (draw the situation and explain how it is bad: each job keeps hopping to a different processor, so it keeps losing its cached state).

We can solve problem 2 by adding affinity: have four of the jobs stay on their processors and have only one job migrate.

Second Solution - Multi Queue Multiprocessor Scheduling (MQMS)

Each processor has its own queue. When a new job comes into the system it is put on exactly one queue, and then each queue is scheduled independently. Advantage: this is scalable with the number of processors (because each one has its own queue). It also inherently provides cache affinity, since jobs stay on the same processor.

However, this introduces a new problem: load imbalance. Say we have two queues with two jobs each. One queue has a job finish - now the remaining job on that queue gets all of that processor’s time. If that job finishes too, then a CPU is sitting empty. The general solution to this is migration. If one queue is empty then it’s easy - just move one job to that CPU. However, it’s trickier in other cases. If CPU A has one job and CPU B has two jobs, what do we do? The best solution is probably continuous migration - i.e., keep moving one job between the two queues.
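The easy case (an empty queue) can be sketched as simple job stealing in Python; this is a toy illustration of migration, not any real kernel’s policy:

```python
# Toy MQMS migration: each CPU has its own queue; when a queue goes
# empty, steal one job from the busiest queue.
def rebalance(queues):
    """Migrate one job from the longest queue into each empty queue."""
    for cpu, q in queues.items():
        if not q:
            busiest = max(queues, key=lambda c: len(queues[c]))
            if len(queues[busiest]) > 1:      # leave the busy CPU one job
                q.append(queues[busiest].pop(0))
    return queues

queues = {"cpu0": [], "cpu1": ["B", "D"]}
print(rebalance(queues))  # {'cpu0': ['B'], 'cpu1': ['D']}
```

The trickier one-job-versus-two-jobs case from above has no clean answer in this scheme; continuous migration amounts to calling something like this repeatedly and letting the odd job bounce between queues.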

In the world of Linux schedulers, both approaches (SQMS and MQMS) have been used in multiprocessor environments.


Proportional share scheduling

  • Advantages generally: “fair” solution - gives each process the proportion of the system that it is assigned to have.
  • Disadvantage generally: leaves open the problem of ticket assignment. Doesn’t mesh well with I/O.

Lottery Scheduling

  • Advantages: simple, quick, no state

Stride Scheduling

  • Advantages: deterministic solution in all cases


SQMS

  • Advantages: conceptually simple.
  • Disadvantages: not scalable and no cache affinity


MQMS

  • Advantages: scales well and has inherent cache affinity
  • Disadvantages: load imbalance problem

The Takeaway

The takeaway from all our lectures on scheduling: scheduling is a very difficult problem, full of all sorts of different tradeoffs.