File Management

From Soma-notes

Pipelining

Pipelining is a strategy that allows even single-core CPUs to work on multiple instructions at once, by overlapping the stages of their execution.

For example, suppose the following list of instructions is to be executed: Load; Compare; Store; Branch.

For each instruction, the following steps have to be performed: Fetch; Decode; Get operands; Do operation; Store result.

Each of these steps may be done by a different piece of silicon. So, while the Load is being decoded, the Compare may be fetched, and so on.

How does the CPU know what this code is going to do next? It may execute the likely instructions, or all possible next steps. This is known as speculative execution. The 'likely' instructions are, for example, the ones along a branch direction that has been taken multiple times before (the CPU has counters to keep track of this).
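A rough way to see this from ordinary code (an illustrative sketch, not part of the original notes): the loop below runs the same branch over the same data twice. Once the data is sorted, the branch becomes predictable, and on most CPUs the second run is noticeably faster because the pipeline is rarely flushed by a misprediction.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 22)

    static int cmp(const void *a, const void *b)
    {
        return *(const unsigned char *)a - *(const unsigned char *)b;
    }

    /* Sum the elements >= 128; the if() is the branch being predicted. */
    static long run(const unsigned char *data)
    {
        long sum = 0;
        for (int pass = 0; pass < 20; pass++)
            for (int i = 0; i < N; i++)
                if (data[i] >= 128)
                    sum += data[i];
        return sum;
    }

    int main(void)
    {
        unsigned char *data = malloc(N);
        for (int i = 0; i < N; i++)
            data[i] = rand() % 256;

        clock_t t0 = clock();
        long s1 = run(data);            /* random order: branch is hard to predict */
        clock_t t1 = clock();

        qsort(data, N, 1, cmp);         /* same values, now sorted */

        clock_t t2 = clock();
        long s2 = run(data);            /* sorted order: branch is predictable */
        clock_t t3 = clock();

        printf("random: %.2fs   sorted: %.2fs   (sums %ld == %ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t3 - t2) / CLOCKS_PER_SEC, s1, s2);
        free(data);
        return 0;
    }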

What is the most efficient execution strategy? It is difficult to tell without actually running the code. Optimizing based on such measurements is known as Feedback-Directed Optimization. For example, thanks to this type of optimization (performed at run time by the JIT compiler), Java code may actually execute faster than the equivalent C++ version.

Pipelining makes context switching more expensive: every time a context switch occurs, the entire pipeline is flushed, so much of the work the processor has already done is lost.


Startup to Shutdown

Startup


1. The BIOS is loaded. The BIOS used to be stored in ROM; now it is typically flash memory, and is modifiable.

2. Device initialization happens. For example, all the bits in RAM are set to 0. All the hardware is put into a sane state.

How does the BIOS know something is bootable? It follows the configured boot order, trying each device in turn, and starts by loading the device's first block.

3. Code (the boot block) is loaded from the boot device.

4. This boot block loads the boot loader. Here is where the switch is made to protected mode.

5. The boot loader loads the kernel
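As a rough illustration of steps 3 and 4: on a classic PC the first block is the 512-byte Master Boot Record, and the firmware checks that its last two bytes are 0x55 0xAA before jumping to it. Here is a minimal user-space sketch of the same check (the disk image path is just an example):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Read the first 512-byte sector of a disk image and check the
           boot signature, much as the BIOS does before jumping to it.
           "disk.img" is an example path. */
        uint8_t sector[512];
        FILE *f = fopen("disk.img", "rb");
        if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
            perror("read first block");
            return 1;
        }
        fclose(f);

        if (sector[510] == 0x55 && sector[511] == 0xAA)
            printf("boot signature found: this block is bootable\n");
        else
            printf("no boot signature: the BIOS would try the next device\n");
        return 0;
    }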


After this is done, the kernel repeats most of the same process. For example, it initializes the hardware again, since the BIOS may not have initialized it all properly.

In the kernel's initialization:


- Devices, CPU and RAM are enumerated and initialized

- Interrupt tables are set up

- Mount the file system: the rest of the code the kernel has to run is stored in a file system. The first file system that is mounted is the fake or initial root file system. This may be the RAM disk (initrd) that the boot loader loaded into memory. This first file system is necessary for the kernel to access other code and device drivers. Once that is done, the real file system can be mounted.

What happens if a root file system cannot be mounted? The kernel panics. We will talk more about this later.

- Switch control to userland. Start the process /sbin/init running.
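To give a feel for this hand-off, here is a toy sketch (not the real /sbin/init) of what init essentially does: start other programs and collect the exit status of any child, including processes that get re-parented to it later.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        /* Toy "init": start one service, then reap children forever
           (real init also adopts orphans re-parented to it). */
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: replace this process image with the service binary. */
            execl("/bin/sh", "sh", "-c", "echo service started", (char *)NULL);
            _exit(127);                 /* only reached if exec failed */
        }

        int status;
        while (wait(&status) > 0)
            ;                           /* collect exit statuses of children */
        return 0;
    }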


It is important to separate mechanism from policy: the kernel provides the mechanism, while policy lives in user space, where it is easier to access and change.

On Unix machines, the init process does not do much itself, but orphaned processes are re-parented to it, and if init dies, the whole system goes down.


- The init process creates new processes (which is totally configurable) and has to load other binaries. So init runs scripts that start processes running (newer init systems do this in parallel). Which scripts run on startup and which on shutdown is indicated by the first letter of the script's name (for example, 'S' for start and 'K' for kill).

- What to do next? Schedule these processes. More on scheduling below. The info on the execution context (CPU state) is in the kernel structure task_struct.

At this stage some processes are listed that do not match any executables on disk. These are kernel threads, which allow the kernel to perform some of its work asynchronously.
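Every process, including those kernel threads, is described by a task_struct. A very simplified sketch of the kind of fields such a structure holds (the real Linux task_struct is far larger):

    /* A much-simplified sketch; the real struct task_struct in the Linux
       kernel has hundreds of fields. */
    struct task_struct_sketch {
        int   pid;                          /* process identifier */
        int   state;                        /* running, sleeping, zombie, ... */
        unsigned long saved_regs[16];       /* CPU state saved on a context switch */
        void *kernel_stack;                 /* per-task kernel stack */
        int   priority;                     /* used by the scheduler */
        struct task_struct_sketch *parent;  /* for re-parenting and wait() */
    };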


Note:

The CPU's supervisor mode is a hardware feature.

The 'root user' is a software feature (it is needed so that the kernel knows which processes' requests to trust).

- Next, the login environment is created, and the user's privileges are set up
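A sketch of one key part of that setup: after authentication, a login program running as root drops down to the user's group and user IDs before starting the user's shell. The IDs and shell path below are made-up examples.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Sketch of the privilege drop a login program performs after
           authenticating the user.  The uid/gid (1000) and the shell
           path are made-up examples. */
        if (setgid(1000) != 0 || setuid(1000) != 0) {
            perror("dropping privileges");
            return 1;
        }
        /* From here on the process cannot regain root privileges. */
        execl("/bin/sh", "-sh", (char *)NULL);
        perror("exec shell");
        return 1;
    }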


Shutdown


- To switch from user back to supervisor mode, a software interrupt is generated

- Shutdown scripts are run. These send signals telling processes to terminate. In particular, a signal is sent to the init process, which has a special handler for it.

- The init process calls exit()

- The kernel halts the system
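A rough sketch of the user-space side of this sequence: ask every process to terminate, force the stragglers, flush buffers, and then ask the kernel to power off (this requires root; the five-second grace period is an arbitrary example).

    #include <signal.h>
    #include <unistd.h>
    #include <sys/reboot.h>

    int main(void)
    {
        /* Must be run as root.  kill(-1, ...) signals every process the
           caller is allowed to signal. */
        kill(-1, SIGTERM);              /* ask everything to terminate */
        sleep(5);                       /* arbitrary grace period */
        kill(-1, SIGKILL);              /* force whatever is left */

        sync();                         /* flush file system buffers */
        reboot(RB_POWER_OFF);           /* the kernel halts the machine */
        return 0;
    }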


Signals, Interrupts and System Calls

Signals send messages between processes. They are sent by invoking the kernel, which then invokes the appropriate signal handler in the receiving process. All processes have signal handlers (default ones if none are installed). A process can handle or ignore most signals, but not the kill signal.
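For example, a process can install its own handler for SIGTERM and choose to ignore SIGINT, but an attempt to catch SIGKILL simply fails (a small sketch):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_term(int sig)
    {
        (void)sig;
        /* Only async-signal-safe calls belong in a handler. */
        write(STDOUT_FILENO, "got SIGTERM\n", 12);
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_term;
        sigaction(SIGTERM, &sa, NULL);      /* install our own handler */
        signal(SIGINT, SIG_IGN);            /* ignore Ctrl-C */

        if (sigaction(SIGKILL, &sa, NULL) == -1)
            perror("SIGKILL cannot be caught");

        pause();                            /* sleep until a signal arrives */
        return 0;
    }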

Interrupts cause the CPU to switch into supervisor mode so that the kernel can handle the interrupt itself.

System calls are made with special instructions that cause a software interrupt, transferring control to the kernel.
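On Linux the C library hides this behind wrapper functions. A small sketch that makes the write system call both through the normal wrapper and through the generic syscall() entry point:

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        /* The usual path: the libc wrapper issues the trap into the
           kernel for us. */
        write(STDOUT_FILENO, "via wrapper\n", 12);

        /* The same system call, naming its number explicitly. */
        syscall(SYS_write, STDOUT_FILENO, "via syscall()\n", 14);
        return 0;
    }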


Device Drivers

A device driver is a special piece of code to tell the kernel how to communicate with a device.

In Windows, if a device driver is not installed the first time the device is connected, the device will simply be ignored on subsequent connections.

There are generic and specific device drivers. The specific device drivers (for a particular model of the device) make the best use of the hardware; for example, they may provide some diagnostic features. The generic device drivers can drive a whole class of devices. They can also run in legacy mode, treating the device as if it were more primitive than what is actually connected.
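A hedged sketch of what a driver's skeleton looks like on Linux: the driver registers a table of functions, and the kernel calls into that table when a program opens or reads the corresponding device file. The "demo" names are illustrative, not a real driver.

    #include <linux/module.h>
    #include <linux/fs.h>

    /* Called when a program reads from our device file; this fake
       device always reports end-of-file. */
    static ssize_t demo_read(struct file *f, char __user *buf,
                             size_t len, loff_t *off)
    {
        return 0;
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    static int major;

    static int __init demo_init(void)
    {
        major = register_chrdev(0, "demo", &demo_fops);
        return (major < 0) ? major : 0;
    }

    static void __exit demo_exit(void)
    {
        unregister_chrdev(major, "demo");
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");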


Scheduling

The simplest scheduling strategy is Round-Robin scheduling, which allocates the same amount of CPU time to each process. It is a little 'too fair', which actually makes it biased against I/O-intensive processes: such a process has to wait for its next turn even though its I/O has already completed, so the disk sits idle for periods of time.
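A toy sketch of round-robin scheduling: keep the runnable processes in a circular queue and hand each one the same time slice in turn (the processes and slice length below are invented for the example).

    #include <stdio.h>

    #define NPROC 3
    #define TIME_SLICE 10               /* "milliseconds"; arbitrary */

    struct proc { const char *name; int work_left; };

    int main(void)
    {
        struct proc procs[NPROC] = {
            {"editor", 25}, {"compiler", 40}, {"backup", 15}
        };
        int finished = 0, current = 0;

        /* Rotate through the queue, giving every process the same slice,
           whether or not it could make better use of the CPU. */
        while (finished < NPROC) {
            struct proc *p = &procs[current];
            if (p->work_left > 0) {
                int used = p->work_left < TIME_SLICE ? p->work_left : TIME_SLICE;
                p->work_left -= used;
                printf("%-8s ran %2d ms, %2d ms left\n",
                       p->name, used, p->work_left);
                if (p->work_left == 0)
                    finished++;
            }
            current = (current + 1) % NPROC;
        }
        return 0;
    }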

Scheduling should balance I/O-intensive and CPU-intensive processes: I/O-intensive processes should get access to the resources they need while CPU-bound processes run in the background.

Processes can be assigned priorities. Some processes with high priority may even have a fixed fraction of the resources allocated to them. For example, in real-time systems certain processes have to wake up once per second.
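On Unix, the ordinary way a process expresses its priority is the nice value; a small sketch (the value 10 is just an example):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Raise this process's nice value (i.e. lower its priority).
           Only root may lower the nice value (raise priority). */
        if (setpriority(PRIO_PROCESS, 0, 10) != 0)
            perror("setpriority");
        printf("nice value is now %d\n", getpriority(PRIO_PROCESS, 0));
        return 0;
    }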

Traditional scheduling may not be sufficiently precise to be acceptable in real-time systems. Things like non-timer interrupts may push back the time scheduled for a particular process.

There are hard and soft real-time systems. A 'hard' RT system requires deadlines to be met exactly (a missed deadline is a failure), while a 'soft' RT system is more forgiving: the value of a late result is merely diminished.

Scheduling and switching from process to process is pure overhead, and this overhead is incurred at least at every timer interrupt. Modern scheduling techniques lean towards making scheduling decisions in constant time, regardless of the number of processes, because of the huge number of processes running simultaneously on the system.


Registry

The registry is a database that is used to store information about installed programs, for example, the user's preferred language. Most of this information is cached in memory.

On Unix systems, the same kind of information is stored in dotfiles (files whose names begin with '.') in the user's home directory, or in files under /etc.
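A tiny sketch of that Unix alternative: reading a key=value preference out of a per-user dotfile. The file name and key below are invented for the example.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Look up a "language=" setting in an invented ~/.myapprc-style
           configuration file in the current directory. */
        char line[256];
        FILE *f = fopen(".myapprc", "r");
        if (!f) {
            perror(".myapprc");
            return 1;
        }
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "language=", 9) == 0)
                printf("preferred language: %s", line + 9);
        fclose(f);
        return 0;
    }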

Some advantages to having a registry:

- Storage efficiency

- Speed

- Provides a uniform API for file configurations

- Serves as a form of copy protection

In addition, it can be used by programs to hide information, for example, whether the program has previously been installed on the system.