EvoSec 2025W Lecture 4

From Soma-notes

Discussion Questions

  • What did you not understand in the readings? Specifically, what biological terms/concepts would you like to learn more about?
  • How applicable are these readings to computational systems, in your opinion?

Notes

(Sorry, no lecture recording for today.)

Lecture 4
---------

responses overall good

make sure to
 - make it clear you've done the readings
   (without summarizing)
 - point out what you didn't understand
 - say what you got out of them/what you didn't get out of them

G1
--
 - contrast between papers
   - Bateson: cooperation coming out of evolution
   - Michod: no thought involved, but cooperation still happened
 - chicken & egg situation - evolution or trust?
 - wasp stripes copied by harmless insects
   - classifications aren't reliable when agents adapt
 - cheating - can it happen in computer systems? without intervention of people?

G2
--
 - Didn't quite understand free rider problem/cheating
   - how can you stop it?
 - torrenting as evolutionary
   - cooperating individuals sharing fragments
 - how fitness functions could manage cheating & encourage synergy
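
(A rough sketch of that idea, not from the readings: a toy fitness function
for a sharing swarm in which every agent gets an equal cut of the group
benefit, cooperators pay a sharing cost, and detected free-riders pay a
penalty. All names and numbers below are made up for illustration.)

def fitness(cooperates, n_cooperators, n_agents,
            share_cost=1.0, group_benefit=3.0, cheat_penalty=2.5):
    # every agent receives an equal share of the benefit produced by
    # the cooperators (e.g., uploaded fragments in a torrent swarm)
    payoff = group_benefit * n_cooperators / n_agents
    if cooperates:
        payoff -= share_cost      # cooperating costs something
    else:
        payoff -= cheat_penalty   # detected free-riders are penalized
    return payoff

# with no penalty free-riding wins; with a big enough penalty,
# cooperation is the better strategy
print("cooperator :", fitness(True, 7, 10))    # 1.1
print("free-rider :", fitness(False, 7, 10))   # -0.4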

G3
--
 - Michod: how groups manage conflict/group fitness
   - groups managing conflict through cooperation
   - adaptation to help address conflicts
     - may favor group over individual
 - groups allow work to be split up, can add robustness
 - Bateson: inter vs intra-species cooperation
    - may not always be obvious,
      e.g. plants cooperating with animals, oxygen & carbon dioxide
   
G4
--
 - Michod had some confusing biology terms
 - altruism doesn't make sense in a computational context
   - don't want a system to have to be hacked before we can defend against it
   - hacks affect groups not individuals
 - in a group setting, can take advantage of individual strengths, account for weaknesses
   - but with computers, such "helping" each other has to be set up in advance
     externally
 - with computers, nobody wants to be the first victim

G5
--
 - both papers are about the challenge of cheaters in the context of cooperation
 - cooperation works because it is the optimal strategy; that's why it is sustained
   - that's why we want cooperation in computer systems
   - so build the system so cooperation is the only way it works,
     no way to win by cheating
     - e.g., blockchain
 - enforcement (e.g., patrolling cells)
   - with computers, who is doing the bad things? identification is much harder
 - economic perspective: cooperation gives a better payoff
 - cooperation good way to maintain efficiency in distributed computing systems
   - just show that it would be worse to betray; there's no need to enforce,
     agents will behave better with the right incentives
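
(A minimal sketch of that incentive argument, using the standard iterated
prisoner's-dilemma payoffs rather than anything from the papers: against a
partner who simply reciprocates your last move, always defecting earns less
over repeated rounds than always cooperating, so the payoff structure itself
discourages betrayal.)

PAYOFF = {  # (my_move, partner_move) -> my payoff
    ("C", "C"): 3,   # mutual cooperation
    ("C", "D"): 0,   # I cooperate, partner defects
    ("D", "C"): 5,   # I defect against a cooperator
    ("D", "D"): 1,   # mutual defection
}

def total_payoff(my_strategy, rounds=20):
    # play repeated rounds against a tit-for-tat partner
    partner_move = "C"            # partner starts by cooperating
    total = 0
    for _ in range(rounds):
        my_move = my_strategy     # "C" = always cooperate, "D" = always defect
        total += PAYOFF[(my_move, partner_move)]
        partner_move = my_move    # partner copies my last move
    return total

print("always cooperate:", total_payoff("C"))   # 60
print("always defect:   ", total_payoff("D"))   # 24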


Note these papers are about observations & theories
 - the theories do not necessarily follow from the observations

When we see a complex system, we can ask
 - how does it work?
 - how was it made?  <--- evolutionary theories tend to focus on this

from Michod
- fragmentation is just computers/programs doing their own thing
- aggregation is like software dev - combining a bunch of parts to make something that can be distributed
- zygote/spore is like booting/orchestration - making a complex system
  from a much simpler one
  - big difference is trust, esp when considering microbiomes
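
(My own toy illustration of the booting/orchestration analogy, not from
Michod: starting a complex system from a simple seed by bringing services up
in dependency order. The service names are made up.)

from graphlib import TopologicalSorter

# each service lists the services it depends on
DEPENDENCIES = {
    "kernel":    [],
    "network":   ["kernel"],
    "storage":   ["kernel"],
    "database":  ["storage", "network"],
    "webserver": ["database", "network"],
}

def boot_order(deps):
    # return a start order in which every dependency comes up first
    return list(TopologicalSorter(deps).static_order())

print(boot_order(DEPENDENCIES))
# e.g. ['kernel', 'network', 'storage', 'database', 'webserver']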

- we don't understand how living systems bootstrap themselves
  - how does the microbiome get going?
- with computer systems, we often don't understand the booting process either!

it isn't that computers are exactly like living systems
 - but in both we have to solve similar problems

coordination & cooperation
 - trust in distributed computation
   - Google's systems (trusted infrastructure) vs OceanStore (untrusted infrastructure)
 - symbiosis, Margulis