EvoSec 2025W Lecture 19

==Readings==
* [https://homeostasis.scs.carleton.ca/~soma/pubs/findlay-ccsw2020.pdf Findlay, "bpfbox: Simple Precise Process Confinement with eBPF." (CCSW 2020)]
* [https://homeostasis.scs.carleton.ca/~soma/pubs/findlay-bpfcontain2021.pdf Findlay, "Bpfcontain: Fixing the soft underbelly of container security." (arXiv 2021)]

==Discussion Questions==
* Is the complexity of Linux security mechanisms due more to functional requirements or evolutionary processes?
* What is the relationship between trust and confinement?

==Notes==
<pre>
Lecture 19
----------
G1
- complexity comes more from evolution than from design
- open-source software development follows patterns of evolution
- confinement is a tool for trust: the more code is confined, the less it has to be trusted
G2
- complexity comes more from functional requirements
- individuals find a problem with the existing mechanisms and then add something
  to fix it
- that accumulation of additions leads to complexity
- you have to trust that the confinement mechanisms themselves work
G3
- complexity comes more from evolution than from design
- you have to keep modifying what you already have
- complexity arises from the need for compatibility: you add to what is there
  rather than change it
- specifying confinement is a reverse way of producing a model of behavior
- no need to understand any particular user's behavior pattern
Confinement as a problem
- limiting what code can do
- the limits are based, in part, on how trustworthy the code is (sketch below)
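
A small sketch (not from the lecture) of one crude way of limiting what code
can do on a POSIX system, using resource limits; the specific limits chosen
here are illustrative only:

    /* Tighten resource limits before running less-trusted code; the less we
     * trust it, the tighter the limits. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit cpu = { .rlim_cur = 1, .rlim_max = 1 };  /* ~1 s of CPU time */
        struct rlimit fsz = { .rlim_cur = 0, .rlim_max = 0 };  /* no file writes */

        if (setrlimit(RLIMIT_CPU, &cpu) != 0 ||
            setrlimit(RLIMIT_FSIZE, &fsz) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* anything run from here on (including exec'd programs) inherits these limits */
        printf("limits applied; less-trusted code would run here\n");
        return 0;
    }
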
Confinement isn't absolute
- because absolute confinement would prevent cooperation and integration
But it is necessary
- because otherwise errors and attacks propagate
- unconfined systems are too difficult for developers to understand and work with
- they also grow too complex: spaghetti code
Confinement is a fundamental property of modern operating systems
- files separate data
- processes separate code execution
A process is a running program with
- its own virtual CPU
- its own virtual memory (sketch below)
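
A tiny sketch of that separation (standard POSIX, not lecture-specific): after
fork(2), parent and child each have their own virtual memory, so a write in
one is invisible to the other.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int x = 1;
        if (fork() == 0) {   /* child: writes only to its own copy of x */
            x = 42;
            _exit(0);
        }
        wait(NULL);          /* parent: wait for the child, then look at x */
        printf("parent still sees x = %d\n", x);   /* prints 1, not 42 */
        return 0;
    }
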
Traditional operating systems don't do a very good job of confining processes
- because processes are often used as components in larger computations
- they share files, pipes, sockets, and shared memory (sketch below)
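
A sketch of one of those sharing channels, a pipe(2) -- exactly the kind of
deliberate hole in process confinement that cooperation requires:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[64] = {0};

        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        if (fork() == 0) {   /* child: send data across the process boundary */
            const char *msg = "hello across the process boundary";
            write(fds[1], msg, strlen(msg) + 1);
            _exit(0);
        }

        read(fds[0], buf, sizeof(buf) - 1);   /* parent: receive it */
        wait(NULL);
        printf("parent received: %s\n", buf);
        return 0;
    }
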
But what if you want to run programs that you don't trust?
- full confinement => sandboxing (e.g., the JavaScript sandbox in a web browser);
  see the sketch below
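
A minimal sketch of what fuller confinement can look like on Linux: seccomp
strict mode, which leaves a process only read(2), write(2), exit(2), and
sigreturn(2). (Real sandboxes, such as browser renderers, combine seccomp
filters with namespaces; this only shows the idea.)

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>
    #include <unistd.h>

    int main(void)
    {
        printf("entering the sandbox\n");
        fflush(stdout);                     /* flush while we still can */

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
            perror("prctl");
            return 1;
        }

        /* from here on, any syscall other than read/write/exit/sigreturn
         * kills the process with SIGKILL */
        const char msg[] = "still allowed: write(2) to an already-open fd\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);

        /* glibc's exit() uses exit_group(2), which strict mode does not allow,
         * so leave via the raw exit(2) syscall */
        syscall(SYS_exit, 0);
        return 0;
    }
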
Originally, OS virtualization was just a way to share one kernel between multiple
userlands belonging to different individuals (e.g., web hosting)
Containers became popular as a means of deploying software
- for systems administration purposes, not for security
- because a container bundles all the local dependencies for an application
DevOps is enabled by containers
- developers build the containers, and those same containers can be deployed directly
Virtual machines became the unit of resource allocation and security
- mimicking the boundaries of a physical computer
Why do we need a hypervisor to multiplex kernels for security? Can't we just
have one kernel multiplex and confine containers securely? (sketch below)
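
This is essentially the direction the readings take: bpfbox and BPFContain
enforce confinement with eBPF programs attached to LSM hooks in the one shared
kernel. A rough sketch of what such a program looks like, adapted from the
kernel's documented BPF LSM example (it needs CONFIG_BPF_LSM, BTF, and a
libbpf-based loader, none of which are shown here):

    /* sketch: deny mprotect(2) on a process's own stack region */
    #include "vmlinux.h"              /* kernel types generated by bpftool */
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>
    #include <errno.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("lsm/file_mprotect")
    int BPF_PROG(deny_stack_mprotect, struct vm_area_struct *vma,
                 unsigned long reqprot, unsigned long prot, int ret)
    {
        if (ret != 0)           /* honour a denial from an earlier program */
            return ret;

        /* does this mprotect() cover the process's stack? */
        int is_stack = (vma->vm_start <= vma->vm_mm->start_stack &&
                        vma->vm_end >= vma->vm_mm->start_stack);

        return is_stack ? -EPERM : 0;   /* 0 = allow, -EPERM = deny */
    }
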
So why didn't we push bpfcontain more?
</pre>