From Soma-notes

These notes have not yet been reviewed for correctness.


Last class, we began talking about turning the machine we have into the machine we want.

What are some properties of the machine that we want?

    - usable / accessible
    - stable / reliable
    - functional (access to underlying resources)
    - efficient
    - customizable
    - secure
    - multitasking
    - portable


    - If you have a computer that doesn't let you access the hardware via an input device (e.g., keyboard, mouse), it isn't very functional.
    - For portability, do you really want to have to rewrite applications to support slight variations in the hardware, such as different-sized hard disks and different amounts of RAM?
    - Multitasking is distinct from efficiency: a system can run many programs at once without using the hardware efficiently, and a single-tasking system can still be efficient at the one thing it runs.

Operating systems don't do all of these perfectly, but they tend to do a lot of these at least acceptably.

If you look at the introduction to the textbook, it talks about various types of operating systems. Some of the operating systems we know about are: Linux, Windows, Mac OS X, VxWorks, QNX, MS-DOS, Solaris/*BSD, OS/2, BeOS, VMS, MVS, OS/370, AIX, etc.

Linux isn't a collection of distinct operating systems to the same degree as the different versions of Windows: most Linux distributions share components, whereas the various versions of Windows tend not to.

Of the list above, most are modern operating systems. To be a "modern" OS, there are two major qualities: does it have protected memory, and does it have pre-emptive multitasking? (MS-DOS doesn't qualify as a modern OS because it has neither.)

Protected Memory

What is protected memory?

Student: A situation where each program and the operating system has its own memory, and the OS prevents other programs from writing to another program's memory.

Dr. Somayaji: Access mechanisms to avoid having one program overwrite another program's memory.

This lets you have a situation where if one program crashes, you can just restart it. Damage due to memory overwrites is limited to one program.

Preemptive Multi-tasking

Preemptive multitasking is a way to have more than one program run at a time. Older machines were known as batch machines, and their operating systems were batch operating systems. These machines ran tasks that took a long time to finish execution, so the tasks were placed in a queue and run one at a time in sequence. These were typically things such as payroll and accounts receivable for large businesses. Usually they would be left to run overnight, outputting their results either on magnetic tape or a stack of printouts, which would be returned to the user in the morning.

What is the difference between preemptive multitasking and cooperative multitasking?

    - Preemptive multitasking: the OS enforces time sharing.
    - Co-operative multitasking: each program lets others run.


    If you look at MS-DOS, there are batch files. These are just a sequence of commands to run. It runs them and then returns when done. (cooperative)
    If you want to run a GUI, however, a batch system is unlikely to be what you want, as a GUI environment tends to be interactive. (preemptive)
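The two styles can be sketched with a toy scheduler. This is a made-up Python simulation for illustration (the task format and the `cooperative`/`preemptive` functions are hypothetical, not real kernel code): in the cooperative case a greedy task keeps the CPU until it is completely done, while in the preemptive case the scheduler boots every task out after its time slice.

```python
# Toy illustration of cooperative vs. preemptive multitasking.
# A task is (name, list of work units, greedy?).

def cooperative(tasks):
    # Each task runs as long as it likes per turn: a "greedy" task
    # hogs the CPU until it is completely finished.
    trace = []
    queue = list(tasks)
    while queue:
        name, work, greedy = queue.pop(0)
        burst = len(work) if greedy else 1   # the TASK chooses
        trace += [f"{name}:{w}" for w in work[:burst]]
        if work[burst:]:
            queue.append((name, work[burst:], greedy))
    return trace

def preemptive(tasks, slice_len=1):
    # The "kernel" (this loop) enforces the time slice: every task is
    # booted out after slice_len units of work, greedy or not.
    trace = []
    queue = list(tasks)
    while queue:
        name, work, greedy = queue.pop(0)
        trace += [f"{name}:{w}" for w in work[:slice_len]]
        if work[slice_len:]:
            queue.append((name, work[slice_len:], greedy))
    return trace

tasks = [("A", [0, 1, 2], True), ("B", [0, 1, 2], False)]
print(cooperative(tasks))  # A hogs the CPU: A:0 A:1 A:2 B:0 B:1 B:2
print(preemptive(tasks))   # interleaved:    A:0 B:0 A:1 B:1 A:2 B:2
```

Note that under cooperative scheduling, task B only makes progress when A chooses to stop; under preemption, both make progress regardless of how greedy they are.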

With the big iron in the old days they had big computers that would be sitting mostly idle, except when running the batch jobs. The idea of time sharing came along around then.

Structure of a computer

[Image: Stored program architecture 1.png -- Stored program architecture]

The stored program architecture with today's computers is a bit of a fiction.

The microprocessor is significantly faster than RAM, so modern computers have to wait for data from RAM. However, this time is dwarfed by the time spent waiting for I/O, because I/O devices tend to be mechanical: printers, hard disks, people at keyboards.

This gave rise to the idea: what if we had multiple users and let them share the CPU? This is time-sharing. On modern computers we do the same thing, but instead of sharing among multiple users, we run multiple programs for a single user -- multi-tasking.

In older systems such as Windows 3.1 and Mac OS 9, this was co-operative multi-tasking. When things started running, they'd hog the CPU until they decided they were ready to give it up.

There used to be a great "feature" on old Macs: if you held down the mouse button, no networking would happen, because the program running at the time hogged the CPU for as long as the button was pressed. With pre-emptive multitasking, each program gets booted out periodically so that the system can pay attention to the network, do animations, or let other applications run -- a millisecond here, a millisecond there. Instead of actually running simultaneously, the programs run in turn, but to the end user they appear to run simultaneously.

Sometimes you have 2 or more CPUs, but you have more than 2 things going on...

Processes and the Kernel

Processes are fundamentally the things that get multitasked and protected: a process is the abstraction of a running program. This is what makes an operating system modern. In the old days, there was one memory space, and the OS and its applications all shared the CPU and memory. Now, with a process model, there are barriers all over the place and, more importantly, something in charge governing the processes. It's not a free-for-all; it has a dictator, and its name is the kernel.

Kernel as in the centerpiece.

Question to class: How many people have heard of the term Microkernel? Not many hands.

There are various terms that modify the term kernel, such as monolithic kernel, microkernel, picokernel, etc. These specify how much code is in the kernel. The idea is that the more code is in the kernel, the faster the system goes, but conversely, the more code there is, the higher the risk of crashing.

In the small-kernel designs, all of the problem-prone code is kept out of the kernel and put into processes, since processes can simply be restarted.

The debate about what is faster is not fully settled, for technical and philosophical reasons. Almost all of the operating systems on the list above have big kernels, not small ones.

So if that's what a kernel is, how does a program fit into that? If there's one program to rule them all, where do processes fit in? The kernel decides who gets to run; it implements a priority scheme.

Student: "It got there first. You start the computer, then the kernel gets in. Everything has to talk to it or it doesn't run..."

It gets to set the rules... that's sort of it. In Unix, there's the idea of the init process. It is first to run and has special responsibilities. It is run from a regular binary, at system boot, by the kernel. This still doesn't tell us how the kernel keeps control of it.

The kernel often keeps control by getting the hardware to help. By loading first, the kernel can setup the CPU and memory so that it has control. This type of hardware assistance is generally available to the first code to request it.


Interrupts -- what are they? An interrupt is an alert saying that something has to be done now.

A CPU runs programs until something happens, like someone pressing a key or a network packet arriving. The I/O device raises an interrupt, and the CPU now has to stop and pay attention.

An interrupt is just a mechanism that allows the CPU to change contexts, to switch from running one bit of code to another. There's a standard set of interrupts defined by the hardware, and associated with each interrupt there's a bit of code: when one interrupt happens, run its code; when another happens, run another. For example, for the keyboard there's a routine that reads a key from the keyboard, stores it in a buffer so it's not overwritten when the next key is pressed, and then returns.
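The keyboard example can be sketched as a toy interrupt table. This is a hypothetical Python simulation for illustration only; a real interrupt table lives in hardware-defined memory and holds addresses of machine-code handlers, and the interrupt numbers here are invented.

```python
# Toy model of an interrupt vector table: each interrupt number
# maps to a bit of handler code.

keyboard_buffer = []

def keyboard_handler(scancode):
    # Read the key and buffer it so it isn't overwritten before
    # the program gets around to consuming it.
    keyboard_buffer.append(scancode)

def timer_handler(_):
    print("tick")

# The "interrupt table": interrupt number -> handler routine.
interrupt_table = {0: timer_handler, 1: keyboard_handler}

def raise_interrupt(number, data=None):
    # The CPU stops what it is doing, runs the handler associated
    # with this interrupt number, then resumes.
    interrupt_table[number](data)

raise_interrupt(1, "a")
raise_interrupt(1, "b")
print(keyboard_buffer)  # ['a', 'b']
```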

[Image: Stored program architecture 2.png]

Think of an interrupt as a little kid pulling at your pant leg. It wants your attention now.

The OS controls interrupts to control the CPU (and also what happens with RAM).

Wait a second: if the kernel can only control interrupts, how can it keep general control when no interrupts happen? The clock I/O device! It raises interrupts too.

As a part of the boot sequence, the kernel programs the clock to wake the operating system up every, say, 100th of a second. Call me! So the OS can then keep running and perform its tasks as it needs: "Is everyone behaving nicely? Do I need to kill anyone?"
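That clock-driven scheme can be sketched like this (a hypothetical simulation: `scheduler_tick`, the run queue, and the task names are all made up for illustration):

```python
# Toy sketch: the kernel has programmed the clock to interrupt
# periodically, and uses each tick to regain control and switch
# tasks round-robin.

run_queue = ["editor", "browser", "compiler"]
schedule_log = []

def scheduler_tick():
    # Runs on every clock interrupt: preempt the current task and
    # move it to the back of the queue.
    current = run_queue.pop(0)
    schedule_log.append(current)
    run_queue.append(current)

# Pretend the clock fires 5 times (say, once per 1/100th of a second).
for _ in range(5):
    scheduler_tick()

print(schedule_log)  # ['editor', 'browser', 'compiler', 'editor', 'browser']
```

Every tick is an opportunity for the kernel to check up on the system ("Is everyone behaving nicely?") before handing the CPU to the next task.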

Virtual Memory

A slight fly in the ointment: if you are a program and want to take control, how do you mount a rebellion? You overwrite the interrupt table! This is where protected memory comes in: it prevents a regular program from doing this. As a regular process, you often can't even see the interrupt table.

How is that possible? Many schemes have been proposed for doing protected memory. We'll talk about some variants, but the most widespread method is known as virtual memory. Often tied into the concept of virtual memory is the ability to use disk as memory too.

The fundamental idea is that the address you think your instruction or variable is at in memory is fictional/virtual. Say you want to load from address 2000 into a register, and another program wants to do the same thing -- are they doing the same thing? Nope! In a virtual memory model, the two have nothing to do with each other. Both programs live in their own virtual worlds and can't see each other. The kernel, with the help of a little piece of hardware called the MMU, is able to give each process its own virtual view of memory, and it decides how that view maps to real memory.
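A toy version of that translation, assuming made-up page tables and frame numbers (a real MMU does this in hardware on every memory access):

```python
# Toy sketch of virtual memory: two processes both use virtual
# address 2000, but the kernel has mapped each process's pages to
# different physical frames, so they touch different real memory.

PAGE_SIZE = 4096
physical_memory = {}  # frame number -> {offset: value}

# Per-process page tables, set up by the "kernel":
# virtual page number -> physical frame number (numbers invented).
page_tables = {
    "proc_a": {0: 7},   # proc_a's page 0 lives in frame 7
    "proc_b": {0: 12},  # proc_b's page 0 lives in frame 12
}

def translate(process, vaddr):
    # What the MMU does on every access: split the virtual address
    # into (page, offset) and look the page up in that process's table.
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_tables[process][page], offset

def store(process, vaddr, value):
    frame, offset = translate(process, vaddr)
    physical_memory.setdefault(frame, {})[offset] = value

def load(process, vaddr):
    frame, offset = translate(process, vaddr)
    return physical_memory[frame][offset]

store("proc_a", 2000, "A's data")
store("proc_b", 2000, "B's data")
print(load("proc_a", 2000))  # A's data -- same virtual address,
print(load("proc_b", 2000))  # B's data -- different physical memory
```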

So the kernel controls interrupts to control IO and it controls memory. These are the two key controls. If a kernel can't control these, it can't properly provide protections ("It can't stop the rebellion").

Last class we talked about hypervisors. The whole idea there is that the interrupt table and MMU the kernel thinks it controls are actually virtual ones, provided by the hypervisor.

So you can now run Windows inside a window on Linux, OS X, etc.

The difference between the various versions of Windows:

    - Windows 3.1 didn't have these.
    - Windows 95 and 98 implemented these ideas for some programs, but not all, and programs could get around them easily.
    - Windows XP and Vista are modern.

There's one small problem with Vista and XP, however. This has to do with the nature of the processes:

[Image: Processes and kernel.png]

To allow upgrades, the kernel trusts some programs/users. On Windows, you tend to run as an administrator. This means you're running as the equivalent of the Unix root user. The kernel listens to you and does just about anything you want, including installing programs.

Say you run that cute Christmas animation... which happens to install a keylogger that sends all your keystrokes to the other side of the world, so someone can log into your bank account.

In Unix, there's the concept of root and non-root users. Root can ask for almost anything to be done, including changing the kernel's code -- and if you can tell the kernel to load new code, you can do pretty much anything. As an unprivileged user, the kernel/OS says no.

When people make fun of Windows for being insecure, it's not a fundamental flaw in the design of Windows -- it's a little broken and overly complex in some ways -- but rather certain design choices made along the way in the name of usability, such as running users as administrators so they don't need to do anything special to change settings, install software, or upgrade. This is why we have the current spyware problem.

Vista changes this slightly with UAC (User Account Control), which runs you as a regular user with full privileges available, but asks you whenever a privileged operation needs doing -- Yes/No. And you just click on it. But users still click yes.

And now there are easy ways to turn off UAC completely. We'll talk about this more later when we talk about security.

System Calls

How do you talk to a kernel?

It's the dictator and you're a supplicant. How do you make a timely request to the kernel to ask it to please do something? System calls!

A system call is a standard mechanism for an application to talk to a kernel.

A system call is NOT a function call. In your APIs and the like, it may look like a function call, and may be wrapped in one, but in implementation they are very different.

In order for the kernel to be in control, it has to run with special privileges and not give these to user programs. There are various schemes, but the common one is a 1-bit option: user mode or supervisor mode. In user mode, running as a regular program, you can't touch the I/O devices, the interrupt vectors, or the MMU, but you can run instructions and access your own memory. When you switch to supervisor mode, everything is accessible. The kernel runs in supervisor mode.

So if you're cut off and can't see the kernel, how do you send it a message? You might be able to write to a special place in memory that the kernel checks periodically, but how do you get it to check now? Normally the kernel is invoked by interrupts. So as a user program, to invoke the kernel, you raise an interrupt. There are special instructions -- software interrupts -- that behave like hardware interrupts but are initiated by software, with entries in the interrupt table just like the hardware ones.

So when a user program makes a system call, the kernel can look at the memory of the invoking program. Remember, because of the memory protections, you can't just jump into kernel code; the only way in is via an interrupt.

Therefore, system calls cause interrupts to invoke the kernel.
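That dispatch path can be sketched as follows (a hypothetical Python simulation: `software_interrupt` stands in for the trap instruction, and the call numbers, handlers, and pid are all invented for illustration):

```python
# Toy sketch of the system-call path: a user program cannot jump
# into kernel code directly, so it raises a software interrupt, and
# the kernel dispatches on the system-call number.

kernel_log = []

def sys_write(text):
    # "Kernel" side: do privileged work on the caller's behalf.
    kernel_log.append(text)
    return len(text)

def sys_getpid():
    return 42  # made-up pid for the sketch

# System-call table: call number -> kernel routine (numbers invented).
syscall_table = {0: sys_write, 1: sys_getpid}

def software_interrupt(syscall_number, *args):
    # Stands in for the trap instruction: the CPU switches to
    # supervisor mode, and the kernel runs the requested call.
    return syscall_table[syscall_number](*args)

# "User program": the only way into the kernel is via the interrupt.
print(software_interrupt(0, "hello"))  # 5
print(software_interrupt(1))           # 42
```

On a real system, the wrapper in your C library issues the trap instruction for you, which is why a system call can be mistaken for an ordinary function call.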

In the process of doing a system call, the system has to do a lot of 'paperwork' to change context. System calls are expensive, very expensive. This is one of the things that tends to bound the performance of an operating system.

Modern CPUs are so fast, shouldn't they be able to switch really fast? Turns out the tricks used to make modern CPUs really fast are like those used to make muscle cars -- they tend to go really fast in a straight line, but when you want to turn, you have to slow down to nearly a stop. Modern CPUs are like that.

An interrupt causes all the partial work being done in parallel by a modern CPU -- 10-20 or more in-flight instructions -- to be thrown out. The CPU then has to refill its pipelines and resume. This happens at a level below what the kernel runs at. The kernel also saves its registers before switching contexts, so that it can resume later.