DistOS 2018F 2018-11-26

From Soma-notes

Readings

Containers & Orchestration

Notes

Lecture Nov 26 - In-class notes

language-based virtualisation
           |
           v
 processes | processes | processes
 -----------------------------------
   kernel  |  kernel   |  kernel
 -----------------------------------
             hypervisor
 -----------------------------------
              hardware


Language-based virtualisation: compile binaries on the fly with JavaScript because you want to run arbitrary things in the browser — asm.js and Native Client. WebAssembly is like Java virtual machine bytecode, except that instead of a bytecode language designed to run Java, it is designed to run almost anything: an environment for running code.

Hardware virtual machines (hypervisors): classic x86 assumed only one kernel ran on it, and the CPUs did not have hardware virtualisation built in. So the hypervisor rewrote binaries on the fly so privileged operations wouldn't misbehave; modern processors support virtualisation directly. Page tables are managed by every kernel, so the hypervisor would have to maintain a page table of page tables (shadow page tables). With hardware virtualisation the CPU itself can walk through those two levels. It is faster in hardware, though we can do it all in software — just slower. A hypervisor is significantly faster, but there is always a memory overhead because of the chunkiness: each VM carries its own kernel and memory, so you need more memory to do this.
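A toy sketch of the two-level translation involved (plain Python, made-up page numbers): the guest kernel maps guest-virtual to guest-physical pages, the hypervisor maps guest-physical to host-physical pages. A shadow page table is the software-maintained composition of the two; hardware-assisted virtualisation lets the MMU walk both tables itself.

    # Toy model of nested address translation under a hypervisor.
    # Page tables are just dicts of page numbers; all values are invented.

    guest_page_table = {0: 7, 1: 3, 2: 9}     # guest-virtual page -> guest-physical page
    hypervisor_table = {3: 12, 7: 40, 9: 41}  # guest-physical page -> host-physical page

    def translate(guest_virtual_page):
        guest_physical = guest_page_table[guest_virtual_page]  # walk the guest's table
        host_physical = hypervisor_table[guest_physical]       # walk the hypervisor's table
        return host_physical

    # A shadow page table is the precomputed composition of the two walks,
    # kept in sync by the hypervisor in software; nested paging moves both
    # walks into the CPU.
    shadow_table = {gv: hypervisor_table[gp] for gv, gp in guest_page_table.items()}

    print(translate(1))     # -> 12
    print(shadow_table[1])  # -> 12, without the two-step walk at access time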

OS-level virtualisation: put processes into namespaces, grouping them together so that processes in one group cannot see the others — like a chrooted environment (e.g., FTP servers) or jails, where the kernel is tweaked so that you cannot get out of your isolation.
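A minimal sketch of the chroot-style confinement mentioned above, in Python. The jail path is hypothetical — it assumes a prepared /srv/jail containing a static /bin/sh — and it must run as root; real container namespaces (mount, PID, network) go further than this.

    import os

    # Confine this process's view of the filesystem, the way a chrooted
    # FTP server or a jail does. Requires root and a populated /srv/jail.
    os.chroot("/srv/jail")       # "/" now refers to /srv/jail for this process
    os.chdir("/")                # move the working directory inside the new root
    os.execv("/bin/sh", ["sh"])  # this shell cannot name any file outside the jail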

With hypervisors you have a nice, clean hardware-like interface to a kernel — a lower-level abstraction, so it is simpler and you can multiplex between kernels. The problem is that you are running multiple kernels, which is always overhead: one kernel for every separation you have made. Virtual appliances are kind of dumb and wasteful, and they need integration between them to share resources. Web hosting is OS-level virtualisation: kernel modifications run separate user lands, so you can be root on your web server while still being on the same kernel as everyone else, just separated.

With containers — what is a container? It is OS-level virtualisation, just the grouping, but managed. How do you get your application running in production? Why do apps break when you send them to another environment? A library was forgotten as a dependency, or the app needed a particular config environment. Apps were built without being isolated or packaged together.

Linux distros and package management take a lot of work to do correctly, and the people deploying the packages are not the developers of the apps — they are specialists in the distro who package for the distro (e.g., people at Ubuntu). Containers let us get away from that: the app plus all its dependencies, files, etc. — an OS image to deploy into containers.
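A minimal sketch of "app plus all its dependencies as an OS image", using the Docker SDK for Python (the docker package); the path, tag, and registry are hypothetical, and it assumes app/ contains a Dockerfile describing the user land to bundle and a local Docker daemon is running.

    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    # Build an image from app/Dockerfile: the app, its libraries, config,
    # and the user land it expects, frozen into one deployable artifact.
    image, build_log = client.images.build(path="app/",
                                           tag="registry.example.com/myapp:1.0")

    # Push it to a registry so the exact image the developer built
    # is what gets pulled and run on the cluster.
    client.images.push("registry.example.com/myapp", tag="1.0")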

A developer can build it on their own machine, then distribute it to a cluster and it runs properly! That is great!

Instead of process migration, or shipping entire virtual machines with their own kernels, we use containers: lightweight enough, yet they encompass all the dependencies.

To deploy a distributed app, the components are containers — a service-oriented architecture, instead of one process spread across multiple systems. Kubernetes is how you describe the containers: how they are deployed, how they talk to each other, how they grow based on load, how many instances, and so on. You describe the different containers and it orchestrates the running of all the instances. Is the Kubernetes infrastructure trusted or untrusted? Highly trusted — it runs within a trusted environment. That is the security story with containers.
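A minimal sketch of describing a set of container instances to Kubernetes, using the official Python client (the kubernetes package); the names, image, and replica count are made up, and it assumes local kubectl credentials for a reachable cluster.

    from kubernetes import client, config

    config.load_kube_config()  # use local kubectl credentials
    apps = client.AppsV1Api()

    # Describe the desired state: three instances of one container image.
    # Kubernetes keeps that many running, restarting or rescheduling as needed.
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="registry.example.com/myapp:1.0",
                        ports=[client.V1ContainerPort(container_port=80)]),
                ]),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)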

A hypervisor gives more isolation. There are attacks that breach the barrier between VMs, but AWS is based on running virtual machines for different customers: each gets its own kernel, and Amazon does it that way to enforce strong isolation, while a customer will use containers inside their VMs. Containers should not come from untrusted sources; if one container is attacked by another, all bets are off — that is why you put a separate kernel between them. As an infrastructure-as-a-service provider you let customers run a VM with a full Linux or Windows kernel that they control, but you do not trust them, so they are isolated; they can still run Kubernetes on top of it because they deploy the entire Kubernetes infrastructure themselves.

Docker was the first to make containers — OS-level virtualisation — mainstream, and now everyone is doing it; there are competitors. You want to make containers, and then set things up to run containers in response to load, with monitoring and rules you specify with Kubernetes.
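A minimal sketch of what making and running a container looks like from code, again with the Docker SDK for Python; the image name is hypothetical and a local Docker daemon is assumed.

    import docker

    client = docker.from_env()

    # Start a container from an image: the grouped, namespaced process
    # gets the user land packaged in the image, not the host's.
    output = client.containers.run("registry.example.com/myapp:1.0",
                                   command="cat /etc/os-release",
                                   remove=True)
    print(output.decode())  # shows the distribution inside the image, not the host's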

Containers are an abstraction over the OS that actually works. Does Google use Kubernetes? No — they use Borg. With all the problems Borg has, and with Kubernetes being better engineered, why do they still use Borg? Because they started with Borg and built their distributed systems on it; changing infrastructure over is difficult, so most of their infrastructure still runs Borg. Things are nicer at other places precisely because Google is still on the legacy stuff — all broken, quirky, and messed up — while it can be nice and clean elsewhere. Folks at Amazon actually have changed over, because everything is separated: with a service-oriented architecture you can replace the services as you go.

Containers are a Linux technology, and most run a Debian user land. There are different container formats and different ways to do it, but you get the full power of Linux — the Linux API is becoming the default execution environment. The bad thing is that you now have to provide the entire OS user land. If you leave a mess inside, it might still work, but what about security updates? It is a user land, so any libraries inside are frozen. How do you update the container? You deploy a new container — treat it as immutable, functional-like. Managing state becomes managing the application, and the application is just a container.
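A minimal sketch of updating by replacement rather than in place, using the same Python Kubernetes client as above: patching the Deployment's image (names hypothetical) makes Kubernetes roll out fresh containers built from the new image and retire the old ones.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Don't patch files inside running containers; change the image the
    # Deployment refers to and let Kubernetes replace the instances.
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/myapp:1.1"},
    ]}}}}

    apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)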

DevOps: what the developer does is what goes into production. That is good, but potentially not good if they do something silly. It makes updating the infrastructure easier, and what the infrastructure must do for the containers becomes minimal — just support whatever features the OS provides, with the kernel staying backwards compatible at the system call interface.

The developer becomes responsible for deploying OSs, and when there is a problem and something breaks, it comes back to them — all operations knows how to do is run it.

This ties into serverless computing. Containers will stick around because this feels durable; it seems to work. It takes old concepts and reuses them inside the container — users and groups inside the container, for example — because again it is a whole OS.

This is the future of deploying applications, the new trend in computing: load-balanced and so on. This is how you deploy a web app, and everything is becoming a web app. Containers may change, but in an evolutionary way — what OS runs inside of them, the Linux user land, might change; WebAssembly binaries might be directly supported.