EvoSec 2025W Lecture 3

Lecture 3
---------

Perspectives on Trust

G1
 - waking up - do you trust that nothing bad will happen, or do you just get up because you have to?
   - humans can decouple trust from action, but can machines?
 - continuous vs discrete trust
   - "levels of trust" - how do they affect actions? (see the sketch below)

G2
 - game theory
   - prisoner's dilemma: agents are adversaries - where is the sociality of trust?
   - not a full view
 - trust as black and white vs probability

G3
 - trust from creators to machines we create
 - humans have a sense of self-preservation; that's where trust comes from
   - computers don't have that, do they?

G4
 - humans develop trust over time
 - computers make trust decisions instantly
 - humans & computers value different things for trust
   - e.g., an IP address vs a face/voice in a video call (see the sketch below)
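
A minimal sketch (Python; the allowlist is hypothetical) of the kind
of instant, signal-based trust decision a machine makes - a source IP
is checked against a list, with no relationship built over time:

    import ipaddress

    # Networks an administrator has decided to trust (made up here).
    TRUSTED_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                    ipaddress.ip_network("192.168.1.0/24")]

    def trusted(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in TRUSTED_NETS)

    print(trusted("192.168.1.42"))  # True - decided instantly
    print(trusted("203.0.113.9"))   # False - no history consulted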


Why did I assign these readings?
 - see the variety
 - note the lack of coherent purpose

game theory, prisoner's dilemma
 - cooperation is a bad idea in a one-shot game (defection strictly dominates)
   - unless it is an iterated game - see the sketch below
     - classic strategy: tit-for-tat
     - best strategy is tit-for-tat with forgiveness (for preserving cooperation under occasional mistakes)

 - but what if you already trust each other, why would you defect?
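
A minimal sketch (Python) of the iterated prisoner's dilemma with the
standard payoffs (T=5, R=3, P=1, S=0); the strategies and round count
are illustrative:

    # Payoffs: (my score, their score) for each pair of moves.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_hist, their_hist):
        # Cooperate first, then copy the opponent's last move.
        return "C" if not their_hist else their_hist[-1]

    def always_defect(my_hist, their_hist):
        return "D"

    def play(s1, s2, rounds=20):
        h1, h2, score1, score2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            p1, p2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2)
            score1 += p1; score2 += p2
        return score1, score2

    print(play(tit_for_tat, tit_for_tat))    # (60, 60): cooperation pays
    print(play(tit_for_tat, always_defect))  # (19, 24): one loss, then retaliation

Defection wins a single round, but once the game repeats, retaliation
makes mutual cooperation the better long-run outcome.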

Analysis vs synthesis
 - analysis takes existing systems apart; synthesis builds new ones
 - computer security is on the synthesis side

Why do computer systems make trust decisions
 - so fast
 - without reference to past experience

 - too much work/effort/computation/storage? (see the sketch of history-keeping below)
 - user makes the trust decision?
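
A minimal sketch (Python; the update rule and names are assumptions)
of what "reference to past experience" could look like - a reputation
table that must be stored, updated, and queried, which is exactly the
extra work the questions above point at:

    from collections import defaultdict

    # Per-entity history of good/bad interactions.
    history = defaultdict(lambda: {"good": 0, "bad": 0})

    def record(entity: str, outcome_good: bool) -> None:
        history[entity]["good" if outcome_good else "bad"] += 1

    def trust_score(entity: str) -> float:
        h = history[entity]
        total = h["good"] + h["bad"]
        return h["good"] / total if total else 0.5  # unknown: neutral prior

    record("peer-a", True)
    record("peer-a", True)
    record("peer-a", False)
    print(round(trust_score("peer-a"), 2))  # 0.67 after three interactions
    print(trust_score("peer-b"))            # 0.5 - never seen before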

Traditionally, computers haven't been empowered to make trust decisions
 - instead, they enforce trust relationships decided by people
 - i.e. people create policy, computers enforce policy
   - in those policies, the trust associated with entities is NOT determined by
     the computer (see the sketch below)
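
A minimal sketch (Python; the policy table is hypothetical) of that
division of labour - a person writes the policy, the computer only
enforces it:

    # Written by an administrator; not decided by the machine.
    POLICY = {
        ("alice", "/payroll"): True,
        ("bob", "/payroll"): False,
    }

    def enforce(user: str, resource: str) -> bool:
        # Default-deny: anything the policy doesn't mention is refused.
        return POLICY.get((user, resource), False)

    print(enforce("alice", "/payroll"))  # True - the admin trusted alice
    print(enforce("bob", "/payroll"))    # False - the machine just enforces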

Why?
 - computers aren't autonomous

Only autonomous systems can really be trusted
 - because they are empowered to say no

What secrets will a computer never reveal?
 - secret keys in TPMs and similar devices (see the toy analogy below)
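
A toy analogy in Python (a real TPM enforces this in hardware; this
class only illustrates the interface): the secret key lives inside the
device, which will sign and verify but has no operation that exports
the key:

    import hmac, hashlib, os

    class ToyTPM:
        def __init__(self):
            self.__key = os.urandom(32)  # generated inside, never returned

        def sign(self, message: bytes) -> bytes:
            return hmac.new(self.__key, message, hashlib.sha256).digest()

        def verify(self, message: bytes, tag: bytes) -> bool:
            return hmac.compare_digest(self.sign(message), tag)

    tpm = ToyTPM()
    tag = tpm.sign(b"attest: boot ok")
    print(tpm.verify(b"attest: boot ok", tag))  # True
    # There is deliberately no export_key() method - the secret stays inside.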