Operating Systems 2022F Lecture 19

From Soma-notes

Video

Video from the lecture given on November 22, 2022 is now available:

* video: https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec19-20221122.m4v
* auto-generated captions: https://homeostasis.scs.carleton.ca/~soma/os-2022f/lectures/comp3000-2022f-lec19-20221122.cc.vtt

Video is also available through Brightspace (Resources->Zoom meeting->Cloud Recordings tab)

Notes

Lecture 19
----------
 - No class Thursday (US Thanksgiving)
 - A3 & A4 updates

Topics for today
 - virtual memory
 - task & scheduling

A3 was still officially due last night before midnight
 - but I'm now allowing submissions through Thursday
 - so solutions will be posted on Friday

So also more time for T5 & T6
 - but really you should be working on T8 and T9 now

A4 status
 - code is finished & posted
 - just updating the questions

Tasks in the Linux kernel
 - tasks are what the kernel schedules to run on a core
 - associated with a task is CPU state (registers) and an address space
    - note the address space may be shared with other tasks
 - so processes and kernel threads both are "tasks" in the kernel

Tasks are represented by "struct task_struct"
And when the kernel is processing a system call, the task that made the system call is referred to as "current"
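
To make "current" concrete, here is a minimal kernel module sketch (file and function names are made up for illustration, and a standard kbuild setup is assumed; you don't need to write kernel modules for this course). Loading a module happens in process context, so "current" here will be the task that ran insmod:

/* current_demo.c - sketch: inspect "current" from kernel code */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>    /* struct task_struct, current */

static int __init current_demo_init(void)
{
    /* module loading runs in process context, so "current"
       is the task that ran insmod/modprobe */
    printk(KERN_INFO "current_demo: loaded by \"%s\" (pid %d)\n",
           current->comm, current->pid);
    return 0;
}

static void __exit current_demo_exit(void)
{
    printk(KERN_INFO "current_demo: unloaded by \"%s\" (pid %d)\n",
           current->comm, current->pid);
}

module_init(current_demo_init);
module_exit(current_demo_exit);
MODULE_LICENSE("GPL");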

Remember that kernel code is running in three contexts
 - on behalf of a task, processing a system call
 - on behalf of a device, processing an interrupt
 - in a kernel task, doing background processing for some reason
    - many drivers have kernel threads so they can do work whenever they need to
    - seems that interrupts are handled by specific kernel tasks,
      for some drivers at least
 - kernel tasks are identified as being in [] in a ps listing
 - don't even bother trying to send signals to these, the kernel will ignore you
   (I'm pretty sure, might be fun to try)

If you look at any process in /proc, there is a task subdirectory that shows all of the kernel-schedulable threads associated with that process
 - really, /proc shows you a list of tasks, not processes/threads

So what really is a process then?
 - it is a group of tasks
 - there are various ways they can be associated with each other, but
   if you look in /task and see a bunch of things, those all together are the process, with one of them potentially being the "lead task"
 - a single-threaded process will have only one directory in /proc/<PID>/task
 - we're not going to discuss this further
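
To see this on a real system, here is a small user-space sketch (not from lecture; names are made up) that starts a second thread and then lists /proc/self/task, one entry per kernel-schedulable task:

/* task_list.c - sketch: one entry in /proc/self/task per task
   compile with: gcc task_list.c -o task_list -pthread */
#include <stdio.h>
#include <dirent.h>
#include <pthread.h>
#include <unistd.h>

static void *sleeper(void *arg)
{
    (void)arg;
    sleep(2);    /* keep the second task alive while we list the directory */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, sleeper, NULL);

    DIR *d = opendir("/proc/self/task");
    if (!d) {
        perror("opendir");
        return 1;
    }

    struct dirent *e;
    printf("tasks in process %d:\n", getpid());
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                 /* skip . and .. */
        printf("  task id %s\n", e->d_name);
    }
    closedir(d);

    pthread_join(tid, NULL);
    return 0;
}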

Let's talk about address spaces
 - abstraction around virtual addresses

What does this mean?
 - when we work with memory addresses in assembly on UNIX, we are almost always using virtual addresses
 - how do we know?  because different processes can use the *same* addresses with no issues

Emacs accessing address 0x12345678 and Firefox accessing 0x12345678 are COMPLETELY DIFFERENT THINGS
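
A quick way to see this for yourself (an added sketch, with made-up names): after a fork, parent and child print the same virtual address for a global variable, but once the child writes to it they see different values at that address:

/* same_addr.c - sketch: same virtual address, different contents per process */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int x = 42;    /* global: same virtual address in parent and child after fork */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        x = 1000;   /* child's write is invisible to the parent */
        printf("child:  &x = %p, x = %d\n", (void *)&x, x);
        exit(0);
    }
    sleep(1);       /* crude: let the child go first */
    printf("parent: &x = %p, x = %d\n", (void *)&x, x);
    wait(NULL);
    return 0;
}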

But, the CPU needs a single way to access all of the RAM that it has
This numbering is known as physical addresses

Now, do we NEED virtual addresses?  No!
 - older computers and smaller, embedded ones often don't have them
 - instead, we can just split up memory between programs:
    - one will get 20000-40000, another will get 40000-50000, and so on
    - but man, that means a program can't ever count on getting access
      to any specific range of memory - makes life complicated for
      programmers
        - especially if one program misbehaves, memory protection is nice!
    - so hardware people gave us hardware to make using virtual addresses fast

If you want to see what life is like without virtual memory, look up TSRs for MS-DOS...or get used to just running one program at a time, having to reboot between programs.

Virtual memory is implemented with a combination of hardware and software
 - hardware: the TLB, page table lookups
 - software: operating system kernel managing page tables

Page tables are the data structure that defines virtual->physical memory
mappings for every task (process)

So logically, what happens *on every memory access*
 - program references virtual address v
 - system looks in the page table for v, finds the corresponding physical address p
 - it then accesses physical memory at location p
 - and does the read or write to p as needed

So this means that every virtual memory access requires
 - looking things up in the page table, THEN
 - accessing physical memory to do a read/write

The page table is ALSO in RAM, so we're doubling (or more) the amount of time needed on every memory access.
  - the page table is actually a big tree, so may need more than one memory access to look up a value (we have to "walk the tree")
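
Here is a deliberately over-simplified sketch of that logic (an illustration only: a single flat table rather than a tree, so the real cost is even higher than shown):

/* flat_translate.c - toy single-level translation sketch */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)         /* assume 4K pages */
#define NUM_PAGES  (1u << (32 - PAGE_SHIFT))  /* toy 32-bit address space */

/* flat_table[page number] holds a physical frame number; 0 means "not mapped" */
static uint32_t flat_table[NUM_PAGES];

/* One program memory access costs at least two RAM accesses here:
     1. read flat_table[vpn]   (the translation itself)
     2. the read or write of the data at the resulting physical address
   Real page tables are trees, so step 1 can take several accesses. */
static bool flat_translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* page number ("prefix") */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */

    uint32_t pfn = flat_table[vpn];              /* RAM access #1 */
    if (pfn == 0)
        return false;                            /* unmapped: page fault */

    *paddr = (pfn << PAGE_SHIFT) | offset;
    return true;
}

int main(void)
{
    uint32_t paddr;

    flat_table[0x12345678u >> PAGE_SHIFT] = 0x42;   /* pretend mapping */
    if (flat_translate(0x12345678u, &paddr))
        printf("virtual 0x12345678 -> physical 0x%08x\n", paddr);
    return 0;
}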


So there has to be a way to speed this up!  That's the TLB.
 - the TLB is a cache of recently accessed virtual->physical memory mappings
 - it is SUPER FAST (basically the fastest memory in the system)
 - it is also not that large

The TLB is implemented differently from other caches because it is more like a hash table
 - you query it with a virtual address, it then checks ALL of its entries
   to see if any match and then returns the corresponding physical address
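
A rough software sketch of such a lookup (illustrative only, ignoring associativity and the fact that real hardware compares all the entries in parallel):

/* toy_tlb.c - toy fully-associative TLB lookup */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64          /* real TLBs are similarly small */

struct tlb_entry {
    bool     valid;
    uint64_t vpn;               /* virtual page number (the address "prefix") */
    uint64_t pfn;               /* physical frame number it maps to */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* In hardware all entries are compared at once; this loop is just the
   software analogue of that parallel comparison. */
static bool tlb_lookup(uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return true;        /* TLB hit */
        }
    }
    return false;               /* TLB miss: go walk the page table */
}

int main(void)
{
    uint64_t pfn;

    tlb[0] = (struct tlb_entry){ .valid = true, .vpn = 0x12345, .pfn = 0x42 };
    if (tlb_lookup(0x12345, &pfn))
        printf("hit: page 0x12345 -> frame 0x%llx\n", (unsigned long long)pfn);
    if (!tlb_lookup(0x99999, &pfn))
        printf("miss: page 0x99999 not cached, must walk the page table\n");
    return 0;
}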

But really, these lookups aren't happening on a per-byte level, but are at a per-page level.

What is a page?
 - a block but for RAM (disks have blocks, RAM has pages)
 - a page is generally 4K but can be much larger (4M)
   - larger pages are used for specialized purposes normally,
     mapping device memory for example
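
You can check the page size on a real Linux system with sysconf, for example:

/* pagesize.c - print this system's page size */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long psize = sysconf(_SC_PAGESIZE);    /* typically 4096 on x86 Linux */
    printf("page size: %ld bytes\n", psize);
    return 0;
}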

So the page table is mapping virtual to physical *pages*
  - reduces the number of bits that have to be mapped

Think of every address as having a prefix plus an offset
 - prefix is mapped between virtual and physical
 - offset doesn't change, it is just an index into the page
   (you can think of it as a 4K or 8K array of bytes indexed by the offset)

Page tables are interesting as a data structure
 - because we want to make sure parts of it *also* fit in pages, and
   use those pages as efficiently as possible
 - much as a filesystem's data structures all fit into multiples of blocks

Filesystems are optimized for persistence and robustness, while data structures in RAM are optimized for performance

Virtual memory is the set of technologies that allows each process to have its own address space

One nice feature of pages and blocks being basically the same size (or multiples of each other) is that we can easily move data between RAM and disk
 - when we run out of RAM we can just store process data on disk
 - this is what a "page file" or "swap file" is used for
    - swap is an older term for when an entire process would be put on disk,
      rather than specific pages
    - nowadays only portions of processes are kicked out of RAM and onto disk

Page tables are data structures for organizing page table entries (PTEs)
  - a PTE identifies a specific page and associated metadata (valid or not, read/write/execute permissions, etc)
  - the TLB uses PTEs to handle requests to map a virtual address prefix to a physical address prefix
  - when the TLB doesn't have a suitable PTE, then the CPU goes and checks the designated page table for that process
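
A sketch of the kind of metadata a PTE carries (the flag bits below are loosely modeled on x86; check the architecture manuals for the real layout):

/* toy_pte.c - illustrative PTE flag bits and frame number */
#include <stdio.h>
#include <stdint.h>

#define PTE_PRESENT  (1ull << 0)   /* mapping is valid */
#define PTE_WRITE    (1ull << 1)   /* page may be written */
#define PTE_USER     (1ull << 2)   /* accessible from user mode */
#define PTE_ACCESSED (1ull << 5)   /* set by hardware on any access */
#define PTE_DIRTY    (1ull << 6)   /* set by hardware on a write */
/* The physical frame number sits in the middle bits; many architectures
   also have a no-execute bit near the top. */

int main(void)
{
    /* frame number 0x42 in the "address" bits, plus some metadata flags */
    uint64_t pte = (0x42ull << 12) | PTE_PRESENT | PTE_WRITE | PTE_DIRTY;

    printf("frame 0x%llx, present=%d writable=%d dirty=%d\n",
           (unsigned long long)(pte >> 12),
           !!(pte & PTE_PRESENT), !!(pte & PTE_WRITE), !!(pte & PTE_DIRTY));
    return 0;
}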

So as memory has gotten bigger, page tables have gotten deeper
For 32-bit address spaces it was pretty simple
 - a PTE is 4 bytes, a page is 4K
 - so one page can hold 1024 PTEs
 - an offset for a byte in 4K requires 12 bits (2^12 = 4096)

So addresses were split into three fields:
  10 bits for index in level 1 of the page table (the root of the tree)
  10 bits for index in level 2 of the page table (middle nodes)
  12 bits for offset in the page
 ----
  32 bits
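
As a worked example (added here), the address 0x12345678 from earlier splits into a level-1 index of 0x048, a level-2 index of 0x345, and a page offset of 0x678:

/* split32.c - split a 32-bit virtual address into the 10/10/12 fields above */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t vaddr = 0x12345678;                  /* the address from earlier */

    unsigned l1_index = vaddr >> 22;              /* top 10 bits  -> 0x048 */
    unsigned l2_index = (vaddr >> 12) & 0x3FF;    /* next 10 bits -> 0x345 */
    unsigned offset   = vaddr & 0xFFF;            /* low 12 bits  -> 0x678 */

    printf("0x%08x -> L1 index 0x%03x, L2 index 0x%03x, offset 0x%03x\n",
           (unsigned)vaddr, l1_index, l2_index, offset);
    return 0;
}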

Much messier with 64 bits, hence the larger number of page table levels (currently up to five) that the Linux kernel handles!

Note that the page table is a tree, but rather than having 2 pointers in each node, we have 1024 typically.

I showed you some code from the Linux kernel to make things concrete,
but I'm not expecting you to really understand it
 - I only understand bits and pieces!

What I *do* want you to understand are the high-level concepts and to
see how those concepts are reflected in what you can observe on a real Linux system.