<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=EvoSec_2025W_Lecture_3</id>
	<title>EvoSec 2025W Lecture 3 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/index.php?action=history&amp;feed=atom&amp;title=EvoSec_2025W_Lecture_3"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=EvoSec_2025W_Lecture_3&amp;action=history"/>
	<updated>2026-04-22T11:33:30Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=EvoSec_2025W_Lecture_3&amp;diff=24958&amp;oldid=prev</id>
		<title>Soma: Created page with &quot;&lt;pre&gt; Lecture 3 ---------  Perspectives on Trust  G1  - waking up - do you trust that nothing bad will happen, or you just get up because you have to?    - we can decouple, but can machines decouple trust from action?  - continuous vs discrete trust    - &quot;levels of trust&quot; - how does that affect actions  G2  - game theory    - prisoner&#039;s dilemma, agents are adversaries? where is the sociality of trust    - not a full view  - trust as black and white vs probability  G3  -...&quot;</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=EvoSec_2025W_Lecture_3&amp;diff=24958&amp;oldid=prev"/>
		<updated>2025-01-15T16:48:05Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;&amp;lt;pre&amp;gt; Lecture 3 ---------  Perspectives on Trust  G1  - waking up - do you trust that nothing bad will happen, or you just get up because you have to?    - we can decouple, but can machines decouple trust from action?  - continuous vs discrete trust    - &amp;quot;levels of trust&amp;quot; - how does that affect actions  G2  - game theory    - prisoner&amp;#039;s dilemma, agents are adversaries? where is the sociality of trust    - not a full view  - trust as black and white vs probability  G3  -...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
Lecture 3&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
Perspectives on Trust&lt;br /&gt;
&lt;br /&gt;
G1&lt;br /&gt;
 - waking up - do you trust that nothing bad will happen, or you just get up because you have to?&lt;br /&gt;
   - we can decouple, but can machines decouple trust from action?&lt;br /&gt;
 - continuous vs discrete trust&lt;br /&gt;
   - &amp;quot;levels of trust&amp;quot; - how does that affect actions&lt;br /&gt;
&lt;br /&gt;
G2&lt;br /&gt;
 - game theory&lt;br /&gt;
   - prisoner&amp;#039;s dilemma, agents are adversaries? where is the sociality of trust&lt;br /&gt;
   - not a full view&lt;br /&gt;
 - trust as black and white vs probability&lt;br /&gt;
&lt;br /&gt;
G3&lt;br /&gt;
 - trust from creators to machines we create&lt;br /&gt;
 - humans have a sense of self-preservation, that&amp;#039;s where trust comes from&lt;br /&gt;
   - computers don&amp;#039;t have that do they?&lt;br /&gt;
&lt;br /&gt;
G4&lt;br /&gt;
 - humans develop trust over time&lt;br /&gt;
 - computers make trust decisions instantly&lt;br /&gt;
 - humans &amp;amp; computers value different things for trust&lt;br /&gt;
   - IP address vs image/voice in video call&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Why did I assign these readings?&lt;br /&gt;
 - see the variety&lt;br /&gt;
 - note the lack of coherent purpose&lt;br /&gt;
&lt;br /&gt;
game theory, prisoner&amp;#039;s dilemma&lt;br /&gt;
 - cooperation is a bad idea&lt;br /&gt;
   - unless it is an iterated game&lt;br /&gt;
     - classic strategy, tit-for-tat&lt;br /&gt;
     - best strategy is tit-for-tat with forgiveness (for preserving cooperation) - see the sketch below&lt;br /&gt;
&lt;br /&gt;
 - but what if you already trust each other, why would you defect?&lt;br /&gt;
&lt;br /&gt;
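A rough Python sketch of the iterated game and the tit-for-tat strategy (the payoff values and function names are illustrative choices, not from the readings):&lt;br /&gt;
&lt;br /&gt;
# tit-for-tat cooperates on the first round, then mirrors the opponent's last move&lt;br /&gt;
# payoffs use the usual ordering: temptation 5, mutual cooperation 3,&lt;br /&gt;
# mutual defection 1, sucker 0&lt;br /&gt;
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),&lt;br /&gt;
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}&lt;br /&gt;
&lt;br /&gt;
def tit_for_tat(my_history, their_history):&lt;br /&gt;
    # cooperate on round one, then copy the opponent's previous move&lt;br /&gt;
    return 'C' if not their_history else their_history[-1]&lt;br /&gt;
&lt;br /&gt;
def always_defect(my_history, their_history):&lt;br /&gt;
    return 'D'&lt;br /&gt;
&lt;br /&gt;
def play(strategy_a, strategy_b, rounds=10):&lt;br /&gt;
    hist_a, hist_b, score_a, score_b = [], [], 0, 0&lt;br /&gt;
    for _ in range(rounds):&lt;br /&gt;
        move_a = strategy_a(hist_a, hist_b)&lt;br /&gt;
        move_b = strategy_b(hist_b, hist_a)&lt;br /&gt;
        pay_a, pay_b = PAYOFF[(move_a, move_b)]&lt;br /&gt;
        score_a, score_b = score_a + pay_a, score_b + pay_b&lt;br /&gt;
        hist_a.append(move_a)&lt;br /&gt;
        hist_b.append(move_b)&lt;br /&gt;
    return score_a, score_b&lt;br /&gt;
&lt;br /&gt;
print(play(tit_for_tat, tit_for_tat))    # (30, 30) - cooperation sustained&lt;br /&gt;
print(play(tit_for_tat, always_defect))  # (9, 14) - loses only the first round&lt;br /&gt;
&lt;br /&gt;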
Analysis vs synthesis&lt;br /&gt;
 - computer security is on the synthesis side&lt;br /&gt;
&lt;br /&gt;
Why do computer systems make trust decisions&lt;br /&gt;
 - so fast&lt;br /&gt;
 - without reference to past experience&lt;br /&gt;
&lt;br /&gt;
 - too much work/effort/computation/storage?&lt;br /&gt;
 - user makes the trust decision?&lt;br /&gt;
&lt;br /&gt;
Traditionally, computers haven&amp;#039;t been empowered to make trust decisions&lt;br /&gt;
 - instead, they enforce trust relationships decided by people&lt;br /&gt;
 - i.e. people create policy, computers enforce policy (see the sketch below)&lt;br /&gt;
   - in those policies, the trust associated with entities is NOT determined by&lt;br /&gt;
     the computer&lt;br /&gt;
&lt;br /&gt;
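A toy illustration of that split (user names and permissions are invented for the example): the policy is authored by a person, and the program only looks it up and enforces it:&lt;br /&gt;
&lt;br /&gt;
# the POLICY dict stands in for a human-authored decision; the computer&lt;br /&gt;
# never adjusts the trust it encodes, it only applies it mechanically&lt;br /&gt;
POLICY = {&lt;br /&gt;
    'alice': {'read', 'write'},   # trust decided by an administrator&lt;br /&gt;
    'bob':   {'read'},&lt;br /&gt;
    'guest': set(),&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def is_allowed(user, action):&lt;br /&gt;
    # pure enforcement: look up the human-made decision and apply it&lt;br /&gt;
    return action in POLICY.get(user, set())&lt;br /&gt;
&lt;br /&gt;
print(is_allowed('alice', 'write'))  # True&lt;br /&gt;
print(is_allowed('bob', 'write'))    # False - the machine never reconsiders&lt;br /&gt;
&lt;br /&gt;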
Why?&lt;br /&gt;
 - computers aren&amp;#039;t autonomous&lt;br /&gt;
&lt;br /&gt;
Only autonomous systems can really be trusted&lt;br /&gt;
 - because they are empowered to say no&lt;br /&gt;
&lt;br /&gt;
What secrets will a computer never reveal?&lt;br /&gt;
 - secret keys in TPMs and similar devices&lt;br /&gt;
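&lt;br /&gt;
A conceptual sketch of such a secret (the SealedKey class is an invented illustration, not a real TPM interface): the key can be used on request but never read out:&lt;br /&gt;
&lt;br /&gt;
# illustration only: the secret lives inside the object and never leaves it&lt;br /&gt;
import hmac, hashlib, os&lt;br /&gt;
&lt;br /&gt;
class SealedKey:&lt;br /&gt;
    def __init__(self):&lt;br /&gt;
        self.__secret = os.urandom(32)   # generated inside, never handed out&lt;br /&gt;
&lt;br /&gt;
    def sign(self, message):&lt;br /&gt;
        # the device uses the key on the caller's behalf...&lt;br /&gt;
        return hmac.new(self.__secret, message, hashlib.sha256).digest()&lt;br /&gt;
&lt;br /&gt;
    def export(self):&lt;br /&gt;
        # ...but refuses to reveal it; that refusal is the point&lt;br /&gt;
        raise PermissionError('key is sealed inside the device')&lt;br /&gt;
&lt;br /&gt;
tpm = SealedKey()&lt;br /&gt;
tag = tpm.sign(b'attest: platform state')&lt;br /&gt;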
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Soma</name></author>
	</entry>
</feed>