Operating Systems 2019F Lecture 23

From Soma-notes

Video

The video for the lecture given on November 29, 2019 is now available at https://homeostasis.scs.carleton.ca/~soma/os-2019f/lectures/comp3000-2019f-lec23-20191129.m4v

Notes

Lecture 23
----------
lecture will start at 4:05

Review session (Q&A)
 * December 13, 3 PM?
 * will reserve a room and announce if the time is okay
 * will not be recorded, but I will be available for questions on discord

Last class, Dec. 4th: discussing Assignment 4 solutions, topics for exam
 - the quiz will come out on Monday, Dec. 2nd by 4 PM on cuLearn
 - you'll have 48 hours to do the quiz
 - will be multiple choice

Final exam is on Dec. 17th, 7 PM
 - AH, rows 12-23 (Alumni Hall)
 - 2 hours to do the final

Please do course evaluations



Security in Linux, and my research

How do you defend yourself?

What is your threat model?
 - strong encryption doesn't prevent a sledgehammer attack
 - cryptography is not security, but it can be useful when trying to achieve
   security

Malicious mobile software & operating systems

Morris Worm
 - computer worm (program that can move between systems on its own,
   often malicious but not necessarily)
 - propagated across most of the then-existing Internet (in 1988)
 - was supposed to be a benign experiment, but crashed a huge number of
   systems

Used multiple exploits
 - e.g., sendmail, finger - common services UNIX systems used to run
   - sendmail was for receiving email
   - finger answered queries about what users were active on a system,
     e.g. finger soma@homeostasis.scs.carleton.ca would tell you whether
     I was logged in, what my phone number was, and what my "plan" was
 - these programs were listening for connections, and the Morris worm
   compromised them and used them to run its own code

How can you take over a program by sending it data?
 - code injection exploits
 - classic form: buffer overflow attacks

Basic idea of a buffer overflow
 - give a program an input that is too large for a buffer that is on the stack
 - program is stupid, writes past end of buffer
 - in writing past the end of the buffer, it overwrites
   function return addresses
 - attacker sets the return address to one where they have stored
   their own machine code (in the buffer that was overflowed)
 - this machine code would do an execve of a shell or something else
   (hence "shellcode"); a sketch of the kind of vulnerable code is below
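
To make this concrete, here is a minimal C sketch (my own example, not code
from the lecture) of the kind of function a classic stack-smashing attack
targets: a fixed-size stack buffer filled from attacker-controlled input with
no bounds check.

 /* toy_overflow.c: a deliberately unsafe function.  A long enough
  * command-line argument overruns buf and clobbers whatever sits beyond
  * it on the stack, including the saved return address. */
 #include <stdio.h>
 #include <string.h>

 static void greet(const char *input)
 {
     char buf[16];           /* small, fixed-size buffer on the stack */
     strcpy(buf, input);     /* no length check: too-long input writes
                              * past the end of buf */
     printf("hello, %s\n", buf);
 }

 int main(int argc, char *argv[])
 {
     greet(argc > 1 ? argv[1] : "world");
     return 0;
 }

On a modern Linux system, compiler stack protections (e.g. gcc's
-fstack-protector) plus the OS mitigations discussed below usually turn a
long input here into a crash rather than attacker-controlled execution.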

To learn more about classic buffer overflows, see http://phrack.org/issues/49/14.html ("Smashing the Stack for Fun and Profit")

But you should know...classic buffer overflow attacks don't work anymore

Because operating systems changed
 - they got built-in protections

The real advantage of Rust is that it makes memory corruption much easier to avoid

But we have other mitigations

Remember when we did 3000memview in Tutorial 3?  The addresses kept changing?
 - this was to make buffer overflow attacks break
 - because if you overwrite the return address, you'd better know where to
   jump - and if you don't, you'll just crash the process (a quick demo of
   the changing addresses is sketched below)
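
Here is a small sketch (mine, not the 3000memview program itself) that shows
the effect: print a few addresses and run the program twice; with
address-space layout randomization the numbers change between runs.

 /* aslr_demo.c: print addresses from the stack, heap, and code.
  * Run it twice; with ASLR enabled the values differ between runs.
  * (The code address only moves if the binary is position-independent,
  * which is the default with gcc on most current distributions.) */
 #include <stdio.h>
 #include <stdlib.h>

 int main(void)
 {
     int on_stack;
     void *on_heap = malloc(16);

     printf("stack: %p\n", (void *)&on_stack);
     printf("heap:  %p\n", on_heap);
     printf("code:  %p\n", (void *)main);

     free(on_heap);
     return 0;
 }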

Another change is non-executable memory
 - attacker injects code into a buffer on the stack
 - on modern systems, this memory is not marked executable
 - so if an attacker tries to run code there, it won't run (see the sketch below)
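
A rough way to see this (my own sketch, assuming an x86-64 Linux system):
copy a single "ret" instruction into a read/write mapping and try to call it.
Because the page is not marked executable, the call faults instead of running
the injected byte.

 /* nx_demo.c: attempt to execute code from non-executable memory.
  * The mapping below is readable and writable but NOT executable,
  * like a stack buffer on a modern system, so the call is expected
  * to die with SIGSEGV. */
 #include <stdio.h>
 #include <string.h>
 #include <sys/mman.h>

 int main(void)
 {
     unsigned char stub[] = { 0xc3 };   /* x86-64 "ret" instruction */

     void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
     if (page == MAP_FAILED) {
         perror("mmap");
         return 1;
     }
     memcpy(page, stub, sizeof(stub));

     void (*fn)(void) = (void (*)(void))page;
     printf("calling code in a non-executable page...\n");
     fn();   /* expected: SIGSEGV, because the page lacks PROT_EXEC */
     printf("it ran -- this system is not enforcing no-execute\n");
     return 0;
 }

Adding PROT_EXEC to the mmap protection bits makes the same call succeed,
which is exactly the per-page distinction discussed next.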

Interestingly enough, x86 processors originally didn't allow you to do this
on a per-page basis
 - read and execute permission for a page were effectively a single bit:
   if a page was readable, it was also executable
 - adding per-page no-execute support required a change to the processors
   themselves (the NX bit)

A lot of the changes nowadays to operating systems and CPUs at a low
level are to add security features or to remove old features that
caused security problems

So classic buffer overflow attacks were broken by randomized memory layouts and no-execute memory...so what did attackers do?
 - they created new memory corruption attacks (e.g. return-to-libc,
   return-oriented programming)
 - they adapted

Computer security is stuck in a vicious arms race
 - defenders come up with defenses, attackers find ways around them
 - this is not inevitable

If your defenses are very specific, they invite attackers to circumvent them

My view: accept that systems are imperfect.  There will be ways to
exploit them.  How do we deal?

Does any other system have "no" vulnerabilities?
 - not people, not tanks!

Key differences
 - do you know when you've been hurt or at risk?
 - can you react meaningfully?

Future OS-level security defenses should accept that the computer will be
vulnerable, and should act accordingly

Classically, this framing of the problem has fallen under
anomaly-based intrusion detection

Most intrusion detection is either
 - signature-based (patterns of attacks, a blacklist)
 - specification-based (patterns of legal behavior, a whitelist)

These aren't tenable strategies for security, because we can never have
perfect whitelists or blacklists

What people do is assess risk based on past activity
 - can be very complex, but at its base it is just "is this what I expect?"
   (a toy sketch of this idea follows after this list)
 - consider a guard in a large building
   - they get to know who belongs or not
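
As a toy illustration of "is this what I expect?" (my own sketch, far simpler
than real anomaly-based intrusion detection): learn which pairs of
consecutive events a program normally produces, then flag pairs never seen
during training.

 /* anomaly_toy.c: learn pairs of consecutive event codes (think system
  * calls) from a "normal" trace, then count pairs in a new trace that
  * were never observed during training. */
 #include <stdio.h>

 #define NEVENTS 16                      /* toy alphabet of event codes */
 static int seen[NEVENTS][NEVENTS];      /* seen[a][b]: pair (a,b) observed? */

 static void train(const int *trace, int len)
 {
     for (int i = 0; i + 1 < len; i++)
         seen[trace[i]][trace[i + 1]] = 1;
 }

 static int count_anomalies(const int *trace, int len)
 {
     int anomalies = 0;
     for (int i = 0; i + 1 < len; i++)
         if (!seen[trace[i]][trace[i + 1]])
             anomalies++;                /* pair never seen in training */
     return anomalies;
 }

 int main(void)
 {
     int normal[]  = { 1, 2, 3, 1, 2, 3, 1, 2, 3 };  /* expected behavior */
     int suspect[] = { 1, 2, 3, 7, 8, 3 };           /* contains new pairs */

     train(normal, 9);
     printf("normal trace:  %d anomalous pairs\n", count_anomalies(normal, 9));
     printf("suspect trace: %d anomalous pairs\n", count_anomalies(suspect, 6));
     return 0;
 }

A real system would watch windows of actual system calls per process and
decide how to respond; the point above is that the underlying comparison is
just "have I seen this before?"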

How could an operating system assess risk?
 - it should take cues from simpler living systems, not just people
 - can it tell when things have changed?  Can it then react?

eBPF is really exciting to me because it allows for many potential ways
to assess risk
 - you can introspect on almost any part of the system

I've built systems like this (see my dissertation)

Right now, though, they aren't suitable for general use because of a
simple problem
 - we don't know what to do with systems that really monitor themselves or us
 - the "clippy" problem


Security is a social problem, not a purely technical problem
 - it involves providing value beyond protection