Operating Systems 2019F Lecture 20
Video
The video for the lecture given on November 20, 2019 is now available.
Notes
Lecture 20
----------

Assignment 3 is now due Nov. 24th, not the 22nd.

Operating system security

What does it mean to keep an operating system secure? Conventionally, it means
- preventing the operating system code from being compromised (controlled by an unauthorized party)
- enforcing the system security policy (e.g., following the access control policy)

In practice, security policies do not vary that much, and really we should be more specific in our operating system security requirements (newer systems are more specific).

TCB = trusted computing base
- the portion of a system that enforces security policy
- as long as it stays secure, the policy is enforced

Classic OS design says we need a "minimal" TCB - so we can check it and make sure it doesn't have any bugs.

What is the TCB for UNIX/Linux?
- not so clear...

What do you need to "lock down" to make sure a Linux system stays secure? How can each part be compromised?

1. kernel (running in supervisor mode on the CPU)
   - normally hard to get to, since userspace programs can't directly do anything to it, including code running as root
   - but if you have module support, it is possible to load arbitrary code into the kernel
   - system calls may have coding flaws
   - the CPU may have privilege separation flaws
   - defenses: signed kernel modules, vulnerability analysis of the kernel, CPU firmware updates
2. common libraries (e.g., the C library)
   - included in many programs, including ones running as root
   - if compromised, can do anything as the user running the program
   - defenses: signed libraries, vulnerability analysis of the library
3. system daemons (background processes)
   - provide critical functions, run with high privilege
   - can sign, do vulnerability analysis
4. setuid binaries
   - provide critical functions, run with high privilege
   - can sign, do vulnerability analysis
5. common applications (web browsers)
   - everyone runs them! so they can compromise arbitrary users
   - sign, do vulnerability analysis

The TCB of a Linux system is almost everything installed - you can compromise someone even with ls or whoami.

Some systems limit the privileges of root, such as SELinux and AppArmor.
- known as mandatory access control (MAC)
- means root can't change the security policy
- you have to reboot into a special mode to change the policy
- not so good for developer workstations, but may be good for embedded systems or servers

Classic Linux has discretionary access control (DAC), meaning root can change the policy at any time.

What are systems that you use that are really "locked down" - you can't make them run arbitrary code, they can only run code authorized by the manufacturer?
* iOS and Android devices
* game consoles
* many IoT devices
* cars

Most of this "locking down" happens through code signing, but it is very elaborate.
- "trusted computing" - code is verified from first boot, and every subsequent program is also checked, so you get a chain of trust from the bootloader to the application
- necessary because any untrusted code in this chain can mess everything up
- "locked bootloader" -> trusted computing, code signing

When an iOS device is "jailbroken", it is changed so that arbitrary code can be run on it, not just code authorized by Apple.
- but the term means something more

Modern OS and application security is based around another key concept: **sandboxing**.

A sandbox for code isolates the code from the rest of the system.
- limits its access
- allows for running untrusted code safely

BSD has application "jails" - processes in a jail can't escape, they can only access resources inside of the jail.
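As a rough illustration of what jail-style confinement looks like at the system call level, here is a minimal sketch in C (assuming a Unix-like system and a program started as root). It restricts the filesystem view with chroot(2) and then drops root before executing untrusted code; the /var/sandbox and /bin/untrusted paths and uid 65534 are placeholders, and a real jail or container adds much more (process, network, and device isolation).

```c
/*
 * Sketch only: confine a process to a directory, drop root, run untrusted code.
 * Paths and uid/gid values below are example placeholders.
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* 1. Restrict the filesystem view to the sandbox directory. */
    if (chroot("/var/sandbox") != 0) {
        perror("chroot");
        return 1;
    }
    if (chdir("/") != 0) {          /* make sure the cwd is inside the jail */
        perror("chdir");
        return 1;
    }

    /* 2. Drop root so the confined code can't undo the confinement
     *    (65534 is often "nobody" -- an assumption, check your system). */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* 3. Run the untrusted program; it only sees /var/sandbox as "/". */
    execl("/bin/untrusted", "untrusted", (char *)NULL);
    perror("execl");                /* only reached if exec fails */
    return 1;
}
```

Dropping root matters because a process that keeps root inside a chroot can generally break back out; real jails and containers layer additional isolation on top of this basic idea.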
By default, on iOS all applications run inside jails.
- this is why applications on iOS are limited in their functionality

Android doesn't do jails; instead, it has fine-grained application permissions.
- system applications just have more privileges
- this approach is inherently more flexible and less secure

Jails provide analogous functionality to the JavaScript sandbox for web pages, but the implementation is completely different.
- enforced by hardware and the OS kernel rather than by a web browser and language runtime

Language-level sandboxes are inherently safer than OS-level jails.
- see the restrictions on app stores versus the openness of the web
- WebAssembly may bridge this gap

Rootkits
--------

- code designed to hide attacker behavior from regular users and even administrators of a system
- if a rootkit is installed, you can never trust the system, because you can't trust anything it does - it will lie to you
- you fix it by reinstalling, but if the attacker compromised firmware you may have to throw the computer out

Key goals:
- hide processes
- hide files
- ensure access

One aspect of a rootkit is a "back door" - an unauthorized way to gain access to a system.
- e.g., Joshua from WarGames
- classic: a special password that always works

Rootkits don't compromise a system; they are used after a compromise.

You can do a pure userspace rootkit
- change ls, ps, and other utilities
- but it is a pain to truly hide things

Normally, you need to run code as root to truly hide things
- so you do it with a kernel module or similar mechanism
- change how system calls work
  - hide files and processes normally
  - show hidden files & processes when given special arguments

Not a rootkit, but it gives you an idea of the mechanism: fakeroot
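Fakeroot works by using LD_PRELOAD to interpose its own versions of C library calls (such as stat()) so that programs believe they are running as root. Below is a minimal sketch of that same interception mechanism, used here the way a userspace rootkit might: hiding directory entries from programs like ls. The library name, the "hideme_" prefix, and the build commands are made up for the example, and whether a given program calls readdir() or readdir64() depends on how it was compiled.

```c
/*
 * Sketch of LD_PRELOAD interception: hide any directory entry whose name
 * starts with "hideme_" from programs that list directories via readdir().
 *
 * Example build and use:
 *   gcc -shared -fPIC -o libhide.so hide.c -ldl
 *   LD_PRELOAD=./libhide.so ls
 */
#define _GNU_SOURCE
#include <dirent.h>
#include <dlfcn.h>
#include <string.h>

struct dirent *readdir(DIR *dirp)
{
    /* Look up the real readdir() from the C library the first time through. */
    static struct dirent *(*real_readdir)(DIR *) = NULL;
    if (real_readdir == NULL)
        real_readdir = (struct dirent *(*)(DIR *))dlsym(RTLD_NEXT, "readdir");

    /* Pass entries through as usual, but skip the ones we want to hide. */
    struct dirent *entry;
    while ((entry = real_readdir(dirp)) != NULL) {
        if (strncmp(entry->d_name, "hideme_", 7) == 0)
            continue;               /* pretend this entry doesn't exist */
        return entry;
    }
    return NULL;                    /* end of directory */
}
```

The same interception idea, done inside the kernel by changing how system calls behave, is what makes a real rootkit so hard to detect from userspace.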