EvoSec 2025W Lecture 18
Latest revision as of 18:36, 18 March 2025
Readings
- Dabbour, "Towards In-Band Non-Cryptographic Authentication." (NSPW 2020)
- Foster, "Object-Level Recombination of Commodity Applications." (GECCO 2010)
Notes
Lecture 18
----------

G1
 - can be more complex to detect imposters in practice because 1) you won't consider it a possibility, and 2) you'd have to act weird
 - AI chatbots can imitate people given chat history; that could defeat detection attempts
 - shared history may be the strongest authenticator but isn't practical (like narrative auth)

G2
 - What's the connection between these two papers? Seemed obscure
 - create new and identifiable contexts for security
 - security context from code diversity vs. shared knowledge/observations
 - computer-to-computer communication is not like people-to-people communication; is it even feasible to distinguish them?
 - similar to encryption with a shared secret, but the secret is shared context
 - how complex would the models need to be for authentication between computers?

G4
 - doesn't computer behavior boil down to protocols, leaving little opportunity for unknown shared context?
 - if one host is compromised, it can be imitated using stolen data
 - compromised communication allows models to be built up over time
 - how does the link resolver work?!
 - is genetic recombination practical?
 - can you really get more complexity over many generations?
 - what is the similarity between the papers?

G3
 - knowing the attacker could be there biases the conversation
 - in a more "real world" experiment, would people detect impersonation if not primed? suspect they won't
 - in the real world, if users know they've been hacked, they have other ways of communicating this fact
 - if the defender can train a model, the attacker can too, and your behavior is harder to change than a password
 - no mutation of object files, so is this evolution?
 - how can we do mutation here that would generate novelty?
 - don't we still need people? How can this be fully automated? I don't see these papers as practical, but evocative
 - how do people recognize each other when limited to text?
 - can we have programs sexually reproduce like biological organisms, without being designed for this? don't mistake the abstraction for the implementation
 - computer-to-computer "conversational" auth would have models of implementation & context-specific details
   - precise program versions
   - communication details
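The "shared context as shared secret" idea discussed above can be sketched as a challenge-response exchange. This is an illustrative analogy only, not a protocol from the Dabbour paper: the `Peer` class, the history entries, and the use of HMAC over a history entry are all assumptions made for the sketch.

```python
import hashlib
import hmac
import secrets

class Peer:
    """A party whose 'secret' is accumulated shared context,
    not a pre-installed cryptographic key (sketch only)."""

    def __init__(self):
        self.history = []  # shared context built up over past sessions

    def record(self, event: str):
        self.history.append(event)

    def respond(self, challenge_index: int, nonce: bytes) -> bytes:
        # Prove knowledge of a specific shared-history entry without
        # revealing it: HMAC the nonce, keyed with that entry.
        entry = self.history[challenge_index].encode()
        return hmac.new(entry, nonce, hashlib.sha256).digest()

def authenticate(verifier: Peer, prover: Peer) -> bool:
    """Verifier challenges the prover on a random piece of shared history."""
    idx = secrets.randbelow(len(verifier.history))
    nonce = secrets.token_bytes(16)
    expected = verifier.respond(idx, nonce)
    return hmac.compare_digest(expected, prover.respond(idx, nonce))

# Hypothetical shared history between two hosts
alice, bob = Peer(), Peer()
for event in ["2025-03-11: synced repo", "2025-03-12: debugged link resolver"]:
    alice.record(event)
    bob.record(event)
```

Note how this mirrors the G3 objection: an attacker who has observed enough of the communication can rebuild the history and pass the challenge, which is exactly why "your behavior is harder to change than a password" cuts both ways.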
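The G3/G4 questions about recombination without mutation can be made concrete with a toy sketch of object-level crossover: treat each parent program as a set of compiled object files and build a child by picking each object file from one parent or the other. The file names, the uniform selection rule, and the dict representation are illustrative assumptions, not the mechanism from the Foster paper; a real system would link and test the resulting child.

```python
import random

def object_level_crossover(parent_a: dict, parent_b: dict, rng=None) -> dict:
    """Crossover at object-file granularity: for each object-file slot,
    inherit the variant from parent A or parent B with equal probability.

    parent_a / parent_b map object-file name -> build-variant id.
    """
    rng = rng or random.Random()
    assert parent_a.keys() == parent_b.keys(), "parents must share a layout"
    return {name: (parent_a if rng.random() < 0.5 else parent_b)[name]
            for name in parent_a}

# Hypothetical example: two independent builds of the same program
a = {"main.o": "A", "parser.o": "A", "net.o": "A"}
b = {"main.o": "B", "parser.o": "B", "net.o": "B"}
child = object_level_crossover(a, b, random.Random(42))
# A real pipeline would now link the chosen .o files and run tests;
# note there is no mutation step here, echoing the "is this evolution?"
# question above.
```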