Operating Systems 2018F Lecture 14

From Soma-notes

Video

Video from the lecture given on October 31, 2018 is now available.

Notes

In Class

Lecture 14: Memory management
-----------------------------

Pointers
 - variables that refer to memory addresses
 - size of pointer determines the range of addresses it can refer to
 - if you have more memory than can be indexed by your pointer, you have to
   "bank switch"
   - >64K with 16-bit addresses (2 bytes)
   - >4G with 32-bit addresses (4 bytes)
   - same pointer has a different meaning depending on the "bank" being used
   
- 64-bit systems have 8-byte pointers, that is *plenty* large
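
A quick way to see this on a given machine (a minimal C sketch, nothing lecture-specific):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* sizeof(void *) is the pointer size in bytes: 4 on a 32-bit
       system (4G addressable), 8 on a 64-bit system */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("max address:  %#lx\n", (unsigned long) UINTPTR_MAX);
    return 0;
}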


- how do we allocate memory for pointers to point to?
  - which pointers will be valid?


- on older computers, programs would just use whatever memory was there
  - each program took care of its own allocation
  - ran one program at a time (in control of the whole machine)
  - to switch programs, you'd reboot

- but today, we want to run many, many programs on one system
  - we run them as processes
  - pointers in processes don't refer to real memory directly;
    they are "virtual addresses"

Why can't we all just get along?
 - one address space, multiple programs?

Problems
 - who gets which memory range?
 - shared access?  Protections?
 - and what about systems with different amounts of RAM?  Memory layouts?

Possible solutions
 - position-independent code
 - segments
 - load-time linking and relocating
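
A rough sketch of the last idea, load-time relocation (the struct and function names here are made up for illustration): the loader picks a base address and patches every absolute address the image recorded as needing adjustment.

#include <stdint.h>
#include <stddef.h>

/* hypothetical relocation record: offset of a pointer-sized field
   in the loaded image that holds an address needing adjustment */
struct reloc { size_t offset; };

void relocate(uint8_t *image, uintptr_t load_base,
              const struct reloc *relocs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* each recorded field was written as if the program were
           loaded at address 0; add the actual load base */
        uintptr_t *field = (uintptr_t *)(image + relocs[i].offset);
        *field += load_base;
    }
}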

Segments
 - memory "block" with a base address and a length
 - has some meaning associated with it
 - pointers can be relative to base segment address
   - segment registers that are added to segment pointers

Segment register: 5000
Pointer: 2000
Real address: 7000
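
A minimal sketch of that base + offset translation in C (illustrative only, with a bounds check added):

#include <stdint.h>
#include <stdbool.h>

struct segment {
    uint32_t base;    /* e.g. 5000 */
    uint32_t length;  /* size of the segment in bytes */
};

/* translate a segment-relative pointer to a real address;
   returns false if the pointer falls outside the segment */
bool seg_translate(struct segment s, uint32_t pointer, uint32_t *real)
{
    if (pointer >= s.length)
        return false;
    *real = s.base + pointer;   /* 5000 + 2000 = 7000 */
    return true;
}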

x86 before the 80386 was very segmented
 - pointers were effectively a segment + offset

Segments are bad because of fragmentation

Fragmentation arises when you need to allocate variable-sized chunks of memory

Two kinds: internal vs. external

Assume we allocate precisely

Request     Response
4052        4052
7112        7112

External fragmentation:
..........
111.......
11122222..
...22222..
**move**
22222.....

Internal fragmentation: allocations rounded up to fixed-size chunks

Request     Response
4052        4096
7112        8192
10          4096
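
Those responses come from rounding each request up to a fixed chunk size; a small sketch (assuming 4096-byte chunks):

#include <stdio.h>

#define CHUNK 4096u

/* round a request up to a whole number of fixed-size chunks */
static unsigned round_up(unsigned request)
{
    return ((request + CHUNK - 1) / CHUNK) * CHUNK;
}

int main(void)
{
    unsigned requests[] = { 4052, 7112, 10 };
    for (int i = 0; i < 3; i++) {
        unsigned got = round_up(requests[i]);
        /* the difference is internal fragmentation */
        printf("request %5u -> response %5u (wasted %u)\n",
               requests[i], got, got - requests[i]);
    }
    return 0;
}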




Virtual memory
 - fixed size allocations (pages)
 - appear to be contiguous to process (virtual addresses)
 - but are fragmented in real memory (physical addresses)

Page sizes are system-dependent, but are typically 4 or 8K
 (along with large pages)

So, to go from virtual to physical addresses, we have to look up the mapping between them... on every memory access

Virtual      Physical
-------      --------
5125         7152
1122         2234

Hash table?
 - not so good for hardware implementation

Would like to do direct indexing
  - like an array index

But need to compress
  - map pages, not addresses

Every address becomes like a segment
  virtual page address + offset =>
  real page address + offset

offset is lower bits
 - for 4K pages, lower 12 bits

for 32-bit addresses, this means I only need to map
  20 bits -> 20 bits
(copying the lower 12 bits)
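
As a sketch in C (assuming 32-bit addresses and 4K pages):

#include <stdint.h>

#define PAGE_SHIFT 12       /* 4K pages: 2^12 bytes */
#define PAGE_MASK  0xfffu   /* lower 12 bits = offset within the page */

uint32_t page_number(uint32_t vaddr) { return vaddr >> PAGE_SHIFT; } /* upper 20 bits */
uint32_t page_offset(uint32_t vaddr) { return vaddr & PAGE_MASK;   } /* lower 12 bits */

/* translation maps the 20-bit virtual page number to a 20-bit
   physical page number and copies the 12-bit offset unchanged */
uint32_t make_paddr(uint32_t phys_page, uint32_t vaddr)
{
    return (phys_page << PAGE_SHIFT) | page_offset(vaddr);
}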

But we don't want to use a flat array to map the 20 bits
 - too big (2^20 entries x 4 bytes = 4M per process, just for the table)
 - too sparse (most virtual addresses are invalid)

Page table is what is used
 - but it really is a very wide tree

2-level page table

Master page:
 4K holds 1K pointers (4 bytes each)
 can specify offset using 10 bits
 points to secondary pages

Secondary pages
 4K holds 1K pointers (4 bytes each)
 specify offset using 10 bits
 points to data pages

Data pages
 use offset to find value of pointer

Split virtual address as follows:
10 bits + 10 bits + 12 bits
 master    sec.      offset


each secondary page gives you 4M of allocation
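
A sketch of the two-level lookup using that 10 + 10 + 12 split (the table layout here is simplified, not a real x86 page table format):

#include <stdint.h>

/* split a 32-bit virtual address: 10 bits | 10 bits | 12 bits */
#define MASTER_IDX(v) (((v) >> 22) & 0x3ff)
#define SECOND_IDX(v) (((v) >> 12) & 0x3ff)
#define OFFSET(v)     ((v) & 0xfff)

/* master: 1K pointers to secondary pages;
   each secondary page: 1K physical page base addresses */
uint32_t translate(uint32_t **master, uint32_t vaddr)
{
    uint32_t *secondary = master[MASTER_IDX(vaddr)];     /* top 10 bits  */
    uint32_t  page_base = secondary[SECOND_IDX(vaddr)];  /* next 10 bits */
    return page_base + OFFSET(vaddr);                    /* low 12 bits  */
}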

"pointers" in page tables (master & secondary) are called
Page Table Entries (PTEs)
 - upper bits point to a physical page
 - lower bits are metadata
    - e.g., is this mapping valid?
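
A sketch of decoding a PTE (the bit positions here are an assumption for illustration; real architectures define their own layouts):

#include <stdint.h>
#include <stdbool.h>

#define PTE_VALID 0x001u        /* metadata bit: is this mapping valid? */
#define PTE_FRAME 0xfffff000u   /* upper 20 bits: physical page address */

bool     pte_valid(uint32_t pte) { return (pte & PTE_VALID) != 0; }
uint32_t pte_frame(uint32_t pte) { return pte & PTE_FRAME; }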

Cache of PTEs is called the TLB (translation lookaside buffer)
 - why?  blame IBM