COMP 3000 2012 Week 9 Notes

From Soma-notes
Revision as of 17:29, 8 November 2012

PACKAGING

  • Fakeroot fakes root permissions
  • It does so by interposing fake versions of the libraries being called; these report root permissions

It works by messing with the dynamic linker (preloading a library that wraps the real one)

  • We need fakeroot because tar saves file ownership and permissions, so archives can record root-owned files without the build actually running as root
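A quick way to see why this matters: tar archives record each member's owner. The sketch below (plain Python, not fakeroot itself) builds a tar archive in memory and stamps its member with uid/gid 0, which is the effect fakeroot lets tar achieve for an unprivileged build; the file name and contents are made up.

```python
import io
import tarfile

# Build a tar archive in memory; stamp the member as root-owned,
# which is what tar records under fakeroot even though we aren't root.
payload = b"#!/bin/sh\necho hello\n"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="usr/bin/hello")
    info.size = len(payload)
    info.mode = 0o755
    info.uid = 0        # pretend root owns this file
    info.gid = 0
    info.uname = "root"
    info.gname = "root"
    tar.addfile(info, io.BytesIO(payload))

# Reading the archive back shows the recorded (fake) ownership.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("usr/bin/hello")
    print(member.name, member.uid, member.gid, member.uname)
```

Extracting such an archive as root would then create files genuinely owned by root, which is what a package install needs.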


Package management exists on macOS and Windows, but more for patching the OS than for incremental change. The reason they don't have complicated package management is that they don't have Debian's distributed development culture: they're more tightly knit, so big changes happen in a less formal, more verbal way.


/var is current state. /var/lib is important state; don't mess with it. /var/lib/dpkg is where package-management state lives (see the status file).

/var/lib/dpkg/info/*.list -- lists all files installed with a package (dpkg -L <packagename>)

/var/lib/dpkg/info/*.conffiles -- lists the package's overridable configuration files
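The status file uses Debian's control format: one stanza per package, with `Field: value` lines and indented continuation lines. A minimal parser sketch (the sample stanza here is fabricated, but the field names match a typical status entry):

```python
# Minimal parser for one dpkg status-file stanza (Debian control format).
# A real /var/lib/dpkg/status holds one such stanza per installed
# package, separated by blank lines.
sample = """\
Package: hello
Status: install ok installed
Priority: optional
Version: 2.10-2
Description: example package
 An example description continuation line.
"""

def parse_stanza(text):
    fields = {}
    key = None
    for line in text.splitlines():
        if line[:1] in (" ", "\t") and key:   # continuation line
            fields[key] += "\n" + line.strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    return fields

pkg = parse_stanza(sample)
print(pkg["Package"], "->", pkg["Status"])
```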


VIRTUALISATION

  • Convert OS to "Apps" (or app container)
  • OSes, though, want to run in supervisor mode; they're typically designed to be in control

Two approaches: software and hardware

Software:

  • a limited emulator that emulates the hardware. Full emulation is slow, though, because you're simulating the hardware (in the limit you'd be simulating the physics)
  • so we only emulate privileged operations, for example interrupts

Hardware virtualisation:

  • the "chip" itself does the virtualising
  • the hardware intercepts interrupts and then allows the OS to run on the computer's CPU
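Both approaches boil down to trap-and-emulate: the guest's ordinary instructions run directly, and only privileged operations trap to the hypervisor, which emulates them against virtual state. A toy sketch (the instruction names and handler are invented for illustration):

```python
# Toy trap-and-emulate loop. Unprivileged "instructions" run directly;
# privileged ones trap to a hypervisor handler that emulates them.
# The instruction names here are invented for illustration.

PRIVILEGED = {"cli", "sti", "outb"}   # e.g. disabling interrupts, device I/O

class Hypervisor:
    def __init__(self):
        self.traps = []
        self.virtual_irq_enabled = True

    def handle_trap(self, insn, arg):
        # Emulate the privileged operation against virtual state
        # instead of touching the real hardware.
        self.traps.append(insn)
        if insn == "cli":
            self.virtual_irq_enabled = False
        elif insn == "sti":
            self.virtual_irq_enabled = True
        elif insn == "outb":
            pass  # would forward to a device model here

def run_guest(program, hv):
    executed_directly = 0
    for insn, arg in program:
        if insn in PRIVILEGED:
            hv.handle_trap(insn, arg)   # trap into the hypervisor
        else:
            executed_directly += 1      # runs at native speed
    return executed_directly

hv = Hypervisor()
guest = [("add", 1), ("cli", None), ("add", 2), ("sti", None), ("mul", 3)]
direct = run_guest(guest, hv)
print(direct, hv.traps)   # prints: 3 ['cli', 'sti']
```

Software virtualisation does the "trap" check in software (or rewrites the guest's code); hardware virtualisation has the CPU itself deliver the trap.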

Hypervisors

  • Hardware is developed knowing that it will be controlled by multiple masters (OSes)
  • The hypervisor sets policy on how hardware is shared between OSes
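As a sketch of "setting policy", here is a trivial sharing policy: round-robin allocation of CPU time slices between two guests (the guest names and slice count are invented for illustration):

```python
from collections import Counter
from itertools import cycle

# Trivial hypervisor sharing policy: round-robin CPU time slices
# between guests. Guest names and slice count are made up.
guests = ["batch-os", "timeshare-os"]
schedule = cycle(guests)

usage = Counter()
for _tick in range(10):          # hand out 10 CPU time slices
    usage[next(schedule)] += 1   # grant the next guest one slice

print(dict(usage))   # each guest gets an equal share
```

Real hypervisors apply the same idea with far richer policies (priorities, proportional shares) and to memory and devices as well as CPU time.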


History of hypervisors

  • Developed by IBM, in the '60s, for their mainframes
  • Mainframes were originally built for batch processing
  • Timesharing was then developed
  • During the transition, clients wanted to run their old batch code
  • IBM developed hypervisors so that batch OSes could run on the same hardware as the timesharing machine

Desktop Virtualisation

  • Typical desktop hardware, though, is designed to be used by only one OS, so the hypervisor must do tricks
    • for example, one OS will set the video card's registers to one setting, and another OS will set them to other settings
  • Desktop hardware is not so virtualisable
  • Where it is not virtualisable, emulate the virtualisation. This is what VMware figured out how to do: it emulated support for unvirtualisable hardware, in software

  • Modern CPUs do support virtualisation, though: they let you catch software interrupts and redirect them to other OSes
  • Hardware that doesn't have virtualisation support, like some graphics cards, must be emulated by the VM. This is typically slow