Operating Systems 2019F Lecture 8

From Soma-notes

Video

Video from the lecture given on September 27, 2019 is now available at https://homeostasis.scs.carleton.ca/~soma/os-2019f/lectures/comp3000-2019f-lec08-20190927.m4v.

Notes

Lecture 8
---------

In tutorial 3, you saw mmap.  To explain this, we must discuss...

virtual memory!

On every memory access, the CPU must
  get virtual address
  map virtual address to a physical address
  read memory (in cache or main memory) from physical address

this mapping is managed by the MMU (memory management unit)

No MMU, no virtual addresses

To really make things fast, we have the fastest cache in the CPU
 - the TLB (translation lookaside buffer)
 - caches virtual->physical memory mappings

because mappings have to be stored in RAM somewhere, the TLB caches them so that most translations don't need extra memory accesses

What is a cache?

memory hierarchy
-----------------

wide range of storage:
  * fast and small to large and slow

VOLATILE

CPU registers
TLB
level 1 cache
level 2 cache
level 3 cache
main memory (RAM)

---
NON VOLATILE

solid state disk
spinning hard drive
tape drive

memory hierarchy works well as long as programs exhibit high temporal and spatial locality

spatial:
 - accessing one byte means we'll likely access nearby bytes soon

temporal:
 - accessing one byte means we'll likely access the same byte again soon


Hardware manages level 1-3 caches, TLB (generally)
compiler manages CPU registers
OS kernel manages main memory and disk


What would be the optimal algorithm for managing main memory?
 - predict future
 - throw out data that won't be used, load data that will be

In practice, we mostly use LRU (least recently used)
 - if you haven't accessed it recently you probably won't in the future

Memory is always managed in fixed-size chunks
 - reduces overhead

Typically 4K or 8K, but pages can be multiple megabytes (huge pages)

If it is on disk, it is a block
If it is in RAM, it is a page

Page table
 - data structure that maps virtual to physical memory,
   at page-level granularity

each process has its own page table

CPU (the MMU part) walks the page table to map virtual to physical addresses
  - and caches results in the TLB

pages (virtual pages) are stored in frames (physical memory)

when the MMU finds no valid mapping for a virtual address, it raises a page fault and the OS kernel gets called to handle it