<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Operating_Systems_2019W_Lecture_23</id>
	<title>Operating Systems 2019W Lecture 23 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=Operating_Systems_2019W_Lecture_23"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2019W_Lecture_23&amp;action=history"/>
	<updated>2026-04-05T17:00:50Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2019W_Lecture_23&amp;diff=22326&amp;oldid=prev</id>
		<title>Soma: Created page with &quot;==Video==  The video for the lecture given on April 8, 2019 [https://homeostasis.scs.carleton.ca/~soma/os-2019w/lectures/comp3000-2019w-lec23-20190408.m4v is now available]....&quot;</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Operating_Systems_2019W_Lecture_23&amp;diff=22326&amp;oldid=prev"/>
		<updated>2019-04-09T00:19:03Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;==Video==  The video for the lecture given on April 8, 2019 [https://homeostasis.scs.carleton.ca/~soma/os-2019w/lectures/comp3000-2019w-lec23-20190408.m4v is now available]....&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==Video==&lt;br /&gt;
&lt;br /&gt;
The video for the lecture given on April 8, 2019 [https://homeostasis.scs.carleton.ca/~soma/os-2019w/lectures/comp3000-2019w-lec23-20190408.m4v is now available].&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Lecture 23&lt;br /&gt;
----------&lt;br /&gt;
&lt;br /&gt;
Systems security&lt;br /&gt;
&lt;br /&gt;
 - not crypto(graphy)&lt;br /&gt;
&lt;br /&gt;
Cryptography is amazing technology, but it is very brittle&lt;br /&gt;
 - almost nothing is proved secure&lt;br /&gt;
 - implementation is extremely tricky, and it is very easy to make&lt;br /&gt;
   a mistake that undermines all security guarantees&lt;br /&gt;
&lt;br /&gt;
Debian openssl predictable PRNG flaw&lt;br /&gt;
 - someone was trying to get rid of valgrind warnings&lt;br /&gt;
 - the uninitialized memory was used to gather entropy from the OS...&lt;br /&gt;
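&lt;br /&gt;
A hedged toy sketch of why the keyspace collapsed (all numbers and names below are illustrative assumptions, not the actual OpenSSL code): with the entropy-gathering lines removed, key generation was effectively seeded by the process id alone, so an attacker could enumerate every possible key.&lt;br /&gt;

```python
import random

PID_MAX = 32768  # assumed pid ceiling, as on 32-bit Linux of that era

def keygen(pid):
    """Toy stand-in for key generation seeded only by the process id."""
    rng = random.Random(pid)  # all entropy collapses to the pid alone
    return rng.getrandbits(128)

# An attacker can simply precompute every possible "key":
all_keys = set(keygen(pid) for pid in range(PID_MAX))
assert len(all_keys) == PID_MAX  # tiny, trivially enumerable keyspace
```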
&lt;br /&gt;
Can we make perfect software?&lt;br /&gt;
 - if you can prove it correct...maybe?&lt;br /&gt;
 - proofs can be flawed and can make false assumptions&lt;br /&gt;
&lt;br /&gt;
When we find vulnerabilities, we&amp;#039;re engaging in a process that never ends.&lt;br /&gt;
&lt;br /&gt;
But any vulnerability could undermine a whole system&amp;#039;s security&lt;br /&gt;
&lt;br /&gt;
On Linux, the &amp;quot;trusted computing base&amp;quot; (TCB) is&lt;br /&gt;
 - Linux kernel&lt;br /&gt;
 - bootloader&lt;br /&gt;
 - every process running as root (started at boot or setuid root)&lt;br /&gt;
 - some partially privileged processes/executables&lt;br /&gt;
&lt;br /&gt;
Standard security practice is to keep the TCB as small as possible&lt;br /&gt;
 - but this isn&amp;#039;t practical, at least with current development practices&lt;br /&gt;
&lt;br /&gt;
How can we have security with flawed software?!&lt;br /&gt;
&lt;br /&gt;
I believe this is possible, because other systems are highly flawed yet&lt;br /&gt;
are reasonably secure: living systems&lt;br /&gt;
 - we all have a variety of imperfect defenses&lt;br /&gt;
 - but they work well enough to keep most of us alive most of the time&lt;br /&gt;
&lt;br /&gt;
Modern computer antivirus is like requiring a vaccine against every virus&lt;br /&gt;
&lt;br /&gt;
Living systems use a combination of&lt;br /&gt;
 - barriers &amp;amp; general defenses&lt;br /&gt;
 - adaptive defense&lt;br /&gt;
 - diversity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The adaptive immune system&lt;br /&gt;
 - notices damage &amp;amp; other strange behavior&lt;br /&gt;
 - tries many possible solutions&lt;br /&gt;
 - ramps up solutions that work&lt;br /&gt;
&lt;br /&gt;
Say I want a system to detect malicious system calls&lt;br /&gt;
 - if I do large-scale learning (monitor millions of systems, get their&lt;br /&gt;
   system calls, look for bad ones), I will face certain fundamental problems&lt;br /&gt;
 - you have to stay small if you want the accuracy to be usable&lt;br /&gt;
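&lt;br /&gt;
To make the accuracy point concrete, a back-of-envelope base-rate calculation (the numbers below are illustrative assumptions, not figures from the lecture):&lt;br /&gt;

```python
# Base-rate sketch: rare attacks plus a huge event volume mean even a
# "good" false-positive rate buries everyone in alarms.
events_per_day = 1_000_000_000  # syscall events across monitored hosts
attacks = 100                   # actual malicious events in that traffic
false_positive_rate = 0.001     # a seemingly excellent 0.1 percent

false_alarms = (events_per_day - attacks) * false_positive_rate
print(round(false_alarms))  # roughly one million false alarms per day
```

Monitoring one machine locally shrinks the event volume by many orders of magnitude, which is part of why staying small keeps the alarm rate usable.&lt;br /&gt;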
&lt;br /&gt;
My strategy&lt;br /&gt;
 - monitor locally&lt;br /&gt;
 - do stupid learning (table lookup)&lt;br /&gt;
 - don&amp;#039;t do anything too dangerous in response to detected problems&lt;br /&gt;
   (don&amp;#039;t kill processes or delete data)&lt;br /&gt;
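&lt;br /&gt;
The table-lookup idea can be sketched as a set of short system-call windows (a hedged sketch in the spirit of this approach; the window length and traces below are made up):&lt;br /&gt;

```python
WINDOW = 3  # assumed window length; a real system would tune this

def train(profile, trace):
    """Record every length-WINDOW syscall pattern from a normal run."""
    for i in range(len(trace) - WINDOW + 1):
        profile.add(tuple(trace[i:i + WINDOW]))

def misses(profile, trace):
    """Count windows never seen in training: pure table lookup."""
    count = 0
    for i in range(len(trace) - WINDOW + 1):
        if tuple(trace[i:i + WINDOW]) not in profile:
            count += 1
    return count

profile = set()
train(profile, ["open", "read", "write", "close", "open", "read"])
print(misses(profile, ["open", "read", "execve", "write"]))  # prints 2
```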
&lt;br /&gt;
For example, slow down unusually behaving processes&lt;br /&gt;
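&lt;br /&gt;
One hedged sketch of such a response (parameters invented): an exponentially growing per-event delay, so isolated oddities cost almost nothing while sustained anomalous behavior grinds a process to a crawl:&lt;br /&gt;

```python
import time

DELAY_BASE = 0.01  # seconds; illustrative scale factor
DELAY_CAP = 5.0    # never block longer than this per event

def delay_for(recent_anomalies):
    """Exponential slowdown in the number of recent anomalous windows."""
    return min(DELAY_BASE * (2 ** recent_anomalies), DELAY_CAP)

def throttle(recent_anomalies):
    # Delay rather than kill: a false positive just runs slower,
    # instead of losing data or dying.
    time.sleep(delay_for(recent_anomalies))

# 0 anomalies: 10 ms; 8 recent anomalies: 2.56 s of added latency.
```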
&lt;br /&gt;
But the understandability problem (knowing why behavior was flagged) is fundamental&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Soma</name></author>
	</entry>
</feed>