Operating Systems 2015F Lecture 11


Video

The video for the lecture given on October 14, 2015 is now available.

Notes

Lecture 11
----------

 - ftrace
 - virtual memory

Ftrace
------
 - as root, go into /sys/kernel/debug/tracing
 - enable the events you want by echoing "1" into that event's "enable" file, e.g. events/syscalls/sys_enter_chdir/enable
 - look at recorded events in the trace file or, to see new events as they arrive, read from trace_pipe (a small test program follows below)
 - be sure to echo "0" into the enable file when you are done!

For a tutorial on ftrace, see lwn.net:
  https://lwn.net/Articles/365835/  (Part 1)
  https://lwn.net/Articles/366796/  (Part 2)
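
To generate some events to watch, any program that makes the traced
system call will do.  Below is a minimal C sketch (the loop count and
the /tmp target directory are made up for illustration) that calls
chdir() so that sys_enter_chdir events show up in trace_pipe:

    /* chdir_burst.c: make a few chdir() calls to observe with ftrace */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (int i = 0; i < 5; i++) {
            /* each call fires a sys_enter_chdir event;
               "/tmp" is just an example target */
            if (chdir("/tmp") != 0)
                perror("chdir");
            sleep(1);       /* space the events out so they are easy to spot */
        }
        return 0;
    }

With the event enabled and trace_pipe open in one terminal (as root),
compile and run this in another (e.g. cc chdir_burst.c -o chdir_burst
&& ./chdir_burst) and the enter events should stream by.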


Virtual memory
--------------
 - remember the kernel is LAZY
   - defers I/O for as long as possible, in general
 - "mapped" files are only loaded into memory logically at the time of the mmap
 - file I/O only happens when assocatied memory is accessed (read or write)
   - loads in blocks of course
   - data is loaded "on demand"

 - kernel is also lazy about memory allocation
 - consider fork
   - logically, copies a process's entire address space
   - often, though, that address space is quickly discarded
     (when you do an execve)
   - solution: only copy data *as needed*
     - called "copy on write", or COW

 - kernel memory management is about using RAM to maximum effect
   - keep in RAM the files that are currently being accessed
     - and only the parts of those files that are being accessed
   - keep only the code and data of processes that are actually in use

 - why is this hard?
   - what does "current" actually mean?
   - when will recently used data not be used again?
     - the past predicts the future, but not always

 - computer memory is designed assuming locality of various kinds
   - temporal locality
      - access in the past means likely access in the future
   - spatial locality
      - data that is close together is likely to be accessed at
        about the same time (see the traversal sketch below)

   - code and data exhibit different locality patterns
     e.g. video streaming
       - run the same code over and over again
       - data is accessed once then discarded
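
     A small sketch of spatial locality at work (the array size is
     arbitrary): both functions below sum the same elements, but the
     row-by-row version walks memory sequentially while the
     column-by-column version jumps a full row ahead on every access,
     so it uses caches far less effectively:

         /* locality.c: same total work, very different spatial locality */
         #include <stdio.h>

         #define N 1024                   /* array size is arbitrary */
         static int a[N][N];              /* rows are contiguous: C is row-major */

         static long sum_row_major(void)  /* sequential accesses: good locality */
         {
             long sum = 0;
             for (int i = 0; i < N; i++)
                 for (int j = 0; j < N; j++)
                     sum += a[i][j];
             return sum;
         }

         static long sum_col_major(void)  /* strided accesses: poor locality */
         {
             long sum = 0;
             for (int j = 0; j < N; j++)
                 for (int i = 0; i < N; i++)
                     sum += a[i][j];
             return sum;
         }

         int main(void)
         {
             printf("%ld %ld\n", sum_row_major(), sum_col_major());
             return 0;
         }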

   - to take advantage of locality patterns, we have a memory hierarchy
     - different types of memory, with different characteristics:
       - volatility
       - size
       - latency of access
       - bandwidth

   - most obvious: disk and RAM
     - disk is slow, but durable
     - RAM is fast but ephemeral

   - latency versus bandwidth
     - in general, you want low latency and high bandwidth
       - latency: time to get first byte of data
       - bandwidth: rate of data transfer over time
     - you can easily increase bandwidth by increasing parallelism
     - you have to design for latency
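      - rough worked example (made-up but plausible numbers):
        a disk with ~10 ms access latency and ~100 MB/s bandwidth
          - fetching 4 KiB takes ~10 ms + 0.04 ms, so latency dominates
          - fetching 1 GB takes ~10 ms + 10 s, so bandwidth dominates
          - adding a second disk roughly doubles bandwidth, but each
            individual request still waits the full 10 ms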

   - but real hierarchy is deeper
     - CPU registers
     - TLB, translation lookaside buffer
        - cache of virtual to physical memory mappings
        - it is an associative array implemented in hardware
     - L1 cache (smallest, lowest latency)
     - L2 cache (larger, moderate latency)
     - L3 cache (even larger, higher latency, but still faster than DRAM)
     - DRAM (pretty big, high latency, high bandwidth)
     - Solid state disks (flash memory)
     - Hard disks (spinning)
     - Tape
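
    - very rough access latencies (orders of magnitude only, not
      measurements of any particular machine):
        registers / L1 cache    ~1 ns
        L2 / L3 cache           ~10 ns
        DRAM                    ~100 ns
        SSD                     ~100 microseconds
        spinning disk           ~10 milliseconds
        tape                    seconds to minutes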

   - virtual versus physical addresses
     - programs access memory in terms of virtual addresses
     - storage happens with physical addresses
     - virtual to physical address translation happens
       ON EVERY MEMORY ACCESS
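
     A small sketch of what that translation works with, assuming
     4 KiB pages (the variable and its address are arbitrary): a
     virtual address splits into a virtual page number, which the page
     tables and TLB map to a physical frame, plus an offset that is
     carried over unchanged:

         /* vaddr.c: split a virtual address into page number and offset */
         #include <stdint.h>
         #include <stdio.h>

         #define PAGE_SHIFT 12                       /* assumes 4096-byte pages */
         #define PAGE_MASK  ((1UL << PAGE_SHIFT) - 1)

         int main(void)
         {
             int x = 42;
             uintptr_t vaddr = (uintptr_t)&x;        /* pointers hold virtual addresses */

             uintptr_t vpn    = vaddr >> PAGE_SHIFT; /* looked up in the page table / TLB */
             uintptr_t offset = vaddr & PAGE_MASK;   /* copied into the physical address */

             printf("vaddr  = 0x%lx\n", (unsigned long)vaddr);
             printf("vpn    = 0x%lx\n", (unsigned long)vpn);
             printf("offset = 0x%lx\n", (unsigned long)offset);
             return 0;
         }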