Operating Systems 2021F Lecture 17

From Soma-notes
Revision as of 22:15, 16 November 2021 by Soma (talk | contribs) (Created page with "==Video== Video from the lecture given on November 16, 2021 is now available: * [https://homeostasis.scs.carleton.ca/~soma/os-2021f/lectures/comp3000-2021f-lec17-20211116.m4v...")

Video

Video from the lecture given on November 16, 2021 is now available:

Video is also available through Brightspace (Resources->Class zoom meetings->Cloud Recordings tab)

Notes

Lecture 17
----------
 - due dates
    - remember they are all in Brightspace
       - when officially due
       - when last accepted
         - if you miss the official due date, you're behind,
           but we'll take work without penalty until the
           last accepted date
 - symbolic links
 - stat vs lstat
 - device files
 - kernel & userspace
 - kernel modules
 - A3
   - chroot
   - demos
 - T7
   - kernel modules

fsck needs full paths?
 - well, it expects to be working on devices,
   so it wants you to be precise
 - a mess-up can be catastrophic (rescuing a filesystem can
   also mean erasing everything)

Hard links vs symbolic links
 - hard link: name -> inode
 - symbolic link: name -> name

most files are hard links of some kind
 - they refer to some kind of inode

there is no "delete file" system call
 - we just unlink to remove hard links
 - inodes are reclaimed when their reference count is zero

Hard links are more reliable than symbolic links
 - doesn't matter if other hard links are deleted,
   if we have one it will refer to the right file contents
   (inode)

So why symbolic links?
 - well, hard links are confined to one filesystem,
   symbolic links aren't

You can't have broken hard links (unless the filesystem is corrupted)

Broken symbolic links are easy, just delete or move what is pointed to

You can have symbolic links to symbolic links, and they will normally be dereferenced on open
 - up to a limit that varies from system to system (on Linux it's 40)

Note you can only hard link to files you own
 - you can make symbolic links to any file or directory
    - permissions are determined when it is accessed


Device files are a kind of special file


In assignment 3, there are two new things relative to T5 and T6
 - busybox
 - chroot

busybox: swiss army knife of command line UNIX
   - small versions of common utilities in one binary
     (statically linked of course)
   - great for embedded systems

chroot: change where / is
 - runs a command with a new root directory
   - by default, runs your shell


Note that this is a simple container, like what docker and snap use, except
 - no real confinement, as we have full access to /dev and /proc
 - so we can see all processes and all files (by doing the
   appropriate mounts)

To get real confinement, you need namespaces (to limit access to devices and processes) and cgroups (to limit resources)
 - but even these aren't enough
 - to see what is needed, look up bpfbox and BPFContain
   (work of William Findlay, one of my students)

So note that the concept of / can vary between processes!
 - different processes can have different views of
   the filesystem hierarchy

the environment outside of a process, all that we access
via system calls, is very dynamic and changeable
 - it is whatever the kernel wants it to be

Code running inside a process is always limited, even if that process is running as root (euid=0)
 - can still segfault
 - must make system calls to access resources

the kernel is the code on the system running outside of processes
 - it *implements* the process abstraction

What code is in the kernel varies from OS to OS
 - some try to have as little as possible in it,
   those are "microkernels" or even "nanokernels"
    - they put everything they can into processes,
      rather than the kernel
 - but most mainstream OSs are "monolithic kernels"
    - lots of functionality inside the kernel


What goes inside a kernel then?
 - process abstraction
 - memory & CPU management
 - most device drivers
 - filesystems
 - networking
 - graphics APIs (some)

In microkernels, we put most of the above into processes
 - the memory and CPU management, process abstraction
   generally have to stay in the kernel

Modern monolithic kernels: Linux, FreeBSD, Windows, macOS
  (although macOS and Windows started off as microkernels)

Modern microkernels: QNX, L4, GNU Hurd

To really understand kernels, you have to understand code privileges on the CPU

Old (and some embedded) CPUs run all code at a single privilege level
 - all code can do anything
 - so an "operating system" here is just library code, say
   for accessing a disk drive

But once we wanted to do real networking, this was a BAD idea
 - one misbehaving program could crash everything
 - and somebody always had to listen and deal with
   the network, even when other things were going on

Basically, you need to have different privilege levels
to do concurrency properly
 - need to interrupt programs to do background tasks reliably

The more privileged code is what's running in the kernel,
less privileged is in processes

CPUs have
 - supervisor mode: for kernel code
 - user mode: for processes (even ones running as root)

(Actually, x86 processors have rings, with ring 0 for kernel code and ring 3 for user code.  Rings 1 and 2 are rarely used; the ring idea came from a system called Multics.)

By the way, anti-cheat software generally wants to run in the kernel
 - supervisor mode, ring 0
 - so they can see EVERYTHING on your system (and change anything)

Device drivers generally run in the kernel,
but can run as processes (typically at a performance cost)
 - switching between user and supervisor mode is expensive

Why do we have to execute a special CPU instruction to make system calls?
 - because the CPU has to switch from user mode to supervisor mode