Operating Systems 2017F Lecture 18

From Soma-notes
Revision as of 16:29, 16 November 2017 by Krithikasaravanan (talk | contribs)

Video

The video from the lecture given on Nov. 16, 2017 is now available.

Notes

In Class

Lecture 18: Filesystems and such
--------------------------------
* How can you recover a filesystem?
* How do you delete a file?

A filesystem is
 * persistent data structure
 * stored in fixed-sized blocks (at least 512 bytes in size)
 * maps hierarchical filenames to file contents
 * has metadata about files (somehow)

What's in a filesystem?
 * data blocks
 * metadata blocks

How do you organize metadata?

First job: identify basic characteristics of the filesystem

You need a "summary" block that tells you about everything else
 => this is the "superblock"

Normally the superblock is the first block of the filesystem

In the superblock
 - what kind of filesystem is this?
    - what filesystem magic number is there
 - how big is the filesystem?
 - how is it organized?
 - where can I find the rest of the metadata?
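The fields above can be sketched as a hypothetical on-disk layout (the field names and magic value are illustrative, not taken from any real filesystem):

```c
#include <stdint.h>

/* Hypothetical superblock layout; real filesystems (e.g. ext4)
 * have many more fields, but the idea is the same. */
struct superblock {
    uint32_t magic;        /* identifies the filesystem type */
    uint32_t block_size;   /* size of each block in bytes */
    uint64_t total_blocks; /* how big the filesystem is */
    uint64_t inode_start;  /* block where the inode table begins */
    uint64_t data_start;   /* block where data blocks begin */
};

/* Check the magic number: is this "our" kind of filesystem? */
int is_our_fs(const struct superblock *sb)
{
    return sb->magic == 0x53465321u; /* made-up magic number */
}
```

A mount operation would read the first block, interpret it as this struct, and refuse to mount if the magic number does not match.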

for POSIX filesystems
 - file metadata is stored in...inodes
 - most have pre-reserved inodes

So we have
 - superblock
 - inode blocks
 - data blocks
   - data blocks for directories
   - data blocks for files
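One way to picture an inode block: per-file metadata plus pointers to that file's data blocks. A minimal sketch, with field names assumed and loosely modeled on classic UNIX inodes:

```c
#include <stdint.h>

#define NUM_DIRECT 12 /* direct pointers, as in classic UNIX inodes */

/* Hypothetical inode: metadata about one file, and where its data lives. */
struct inode {
    uint32_t mode;               /* file type and permissions */
    uint32_t uid, gid;           /* owner and group */
    uint64_t size;               /* file size in bytes */
    uint64_t mtime;              /* last modification time */
    uint32_t direct[NUM_DIRECT]; /* data block numbers */
    uint32_t indirect;           /* block holding further block numbers */
};

/* How many data blocks does a file of `size` bytes occupy? */
uint64_t blocks_needed(uint64_t size, uint32_t block_size)
{
    return (size + block_size - 1) / block_size;
}
```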

How do you recover from damage?
 - filesystems never "reboot", must remain correct over
   the course of years
 - but errors will happen
   - bitrot
   - "accidental" corruption
   - computer failure/memory corruption/hard reboot

To make filesystems fast, data & metadata is cached in RAM
 - bad things happen if this data hasn't been written to disk and you reboot
 - even worse things happen if your RAM is bad and corrupts the data

Also bad...what if we lose the superblock?
 - you could lose EVERYTHING
 - so we have backup superblocks

Old scandisk/fsck was slow because they had to scan all filesystem metadata
 - not to recover data, but to fix metadata

Nowadays fsck is very fast and we rarely lose data due to losing power
 - we must be writing data to disk all the time
 - but isn't writing all the time slow?

On magnetic hard disks (not SSDs)
 - sequential operations are fast
 - random access is slow
   - we have to move the read/write head

So, on modern systems we update metadata (and sometimes data) by writing
sequentially to disk...and then later writing randomly
 - sequential writes go to the "journal"

On fsck on a journaled filesystem
 - just check the journal for pending operations (replay the journal)
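Replaying the journal can be sketched with a toy model in which each committed entry records a block write that may not have reached its final location yet (the struct and names are assumptions, not a real journal format):

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 16 /* tiny blocks to keep the example small */

/* One journal entry: write `data` to block `target` once committed. */
struct journal_entry {
    uint64_t target;
    int committed; /* only fully committed entries are replayed */
    unsigned char data[BLOCK_SIZE];
};

/* Replay pending operations into their final blocks; returns the
 * number of entries applied. Uncommitted entries are discarded. */
int replay_journal(unsigned char (*blocks)[BLOCK_SIZE],
                   const struct journal_entry *journal, int n)
{
    int applied = 0;
    for (int i = 0; i < n; i++) {
        if (!journal[i].committed)
            continue;
        memcpy(blocks[journal[i].target], journal[i].data, BLOCK_SIZE);
        applied++;
    }
    return applied;
}
```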

There exist filesystems that are pure journal
 - log-based filesystem

Logs and journals inherently create multiple copies of data and metadata that are hard to track. This makes deletion nearly impossible (at least to guarantee).

Only way to guarantee...encrypt everything
 - if every file has its own key, you can delete the key and thus "delete" the data
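The key-deletion idea can be sketched with a toy XOR "cipher" (a real system would use AES; this only shows that wiping the key leaves every lingering ciphertext copy unreadable):

```c
#include <stddef.h>
#include <string.h>

/* Toy per-file encryption: XOR with a repeating key. NOT secure;
 * it only illustrates the crypto-erase idea. */
void xor_crypt(unsigned char *buf, size_t len,
               const unsigned char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}

/* "Deleting" the file: wipe the key. Any stray copies of the
 * ciphertext in journals or logs are now useless. */
void crypto_erase(unsigned char *key, size_t keylen)
{
    memset(key, 0, keylen);
}
```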

Solid state disks (SSDs) use log-structured storage at a level below blocks.
 - writes are coarse-grained (you have to write a lot at once)
 - you don't want to write to the same cells too often, they'll die
   - have to do "wear-leveling"
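A crude sketch of wear-leveling: always direct the next write to the least-worn erase block. (Real flash translation layers are far more sophisticated; the names here are made up.)

```c
#include <stdint.h>

#define NUM_ERASE_BLOCKS 8

/* Erase counts per flash block, as a flash translation layer
 * might track them. */
static uint32_t erase_count[NUM_ERASE_BLOCKS];

/* Pick the least-worn block for the next write and charge it
 * one erase cycle, spreading wear evenly across the device. */
int pick_block_wear_leveled(void)
{
    int best = 0;
    for (int i = 1; i < NUM_ERASE_BLOCKS; i++)
        if (erase_count[i] < erase_count[best])
            best = i;
    erase_count[best]++;
    return best;
}
```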


Additional Notes

Lec 18

  • More on filesystems
  • How can you recover a fs and how do you delete a file?


A filesystem is a:

  • Persistent data structure
  • Stored in fixed size blocks (at least 512 bytes in size)
  • Maps hierarchical filenames to file contents
  • Has metadata about files somehow


What's in a filesystem

  • data blocks (stores file content)
  • metadata blocks (you need some way to find the data blocks)


How do you organize metadata?

First identify basic characteristics of the filesystem
- How big is the filesystem?
- What is the block size?

How do we differentiate between this and other filesystems?
You need a "superblock" which is a "summary" block that tells you about everything else

- Format depends on filesystem
- Normally the superblock is the first block of the filesystem

- Think of it almost like the root of a binary tree
In the superblock

  • Type of filesystem
    • What filesystem magic number is there (lets us identify one filesystem from another just by looking at the first block)
    • The `file` command uses the magic number to identify a file's type
  • Size of the filesystem
  • How the filesystem is organized (different filesystems organize their data differently)
  • Where can I find the rest of the metadata

He opened a .jpg as a binary file to show us the magic number: the first several bytes identify the type of file.
- The kernel does not care about file extensions, but userspace programs may.
- File extensions are only really useful for the people looking at them.
- It is typical for binary file formats to start with a set of bytes that identify the type of file.
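Identifying a file by its magic number, as the `file` command does, can be sketched like this (the JPEG and PNG signatures are real; the function itself is illustrative):

```c
#include <stddef.h>
#include <string.h>

/* JPEG files begin with bytes FF D8 FF; PNG files with 89 'P' 'N' 'G'. */
const char *identify(const unsigned char *head, size_t len)
{
    if (len >= 3 && head[0] == 0xFF && head[1] == 0xD8 && head[2] == 0xFF)
        return "JPEG";
    if (len >= 4 && memcmp(head, "\x89PNG", 4) == 0)
        return "PNG";
    return "unknown";
}
```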
POSIX is a standard for maintaining compatibility between operating systems.
- QNX, UNIX, and macOS are all POSIX compliant
- Others comply to varying degrees

For POSIX filesystems

  • File metadata is stored in inodes
  • When you create a filesystem, certain blocks are dedicated to holding inodes

- It is possible to have free space in your filesystem yet be unable to store things if you run out of inodes

What is Usenet? A worldwide distributed discussion system (stone-age version of Reddit)
- Largely abandoned now because it could not handle the spam people uploaded to it, lol
- Usenet stored every message in its own file => lots of small files (which can exhaust inodes)
- Everyone had a local Usenet server, giving access to read posts on the forum
- To post a message, you sent it to your local server, which replicated it and sent it to all the other servers
So we have:

  • superblock
  • inode blocks
  • data blocks
    • data blocks for directories
    • data blocks for files


How do you recover from damage?

  • Filesystems never "reboot", must remain correct over time
  • Errors will happen: bitrot (when bits change), accidental corruption, computer failure/memory corruption/hard reboot


To make filesystems fast, data and metadata are cached in RAM

  • Bad things happen if this data hasn't been written to disk and you reboot
  • Even worse things happen if your RAM is bad and corrupts the data
  • fsck is like scandisk in Windows 98 (it only runs automatically after a hard reset)


What happens if you lose the superblock?

  • You could lose EVERYTHING
  • In the demo, a `dd` command blew away the first bytes of the filesystem, so you could not mount it because the superblock was corrupted. However, fsck fixed this because we have backup superblocks :D

- Most filesystems keep copies of the superblock at various locations throughout the disk, which takes up a small amount of otherwise-usable space
- But replicating data blocks this way would be impractical

Old scandisk/fsck was slow because they had to scan all filesystem metadata

  • Not to recover data, but to fix metadata
  • lost+found may contain files you can recover; it is a directory reserved in the filesystem for fsck to use.


Nowadays fsck is very fast and we rarely lose data due to losing power

  • What this means is we must be writing to disk all the time
  • But isn't writing slow? Not all writes are slow, particularly sequential writes on conventional hard disks.


On magnetic hard disks (not SSDs)

  • sequential operations are fast
  • random access is slow
    • we have to move the read/write head


So, on modern systems we update metadata (and sometimes data) by writing sequentially to disk...and then later writing randomly

  • Sequential writes go to the journal


On fsck on a journaled filesystem

  • just check the journal for pending operations (replaying the journal)

There exist filesystems, optimized for writes, that are pure journal

  • log-based filesystem


Logs and journal inherently create multiple copies of data and metadata that are hard to track. This makes deletion nearly impossible (at least to guarantee)

Only way to guarantee...encrypt everything

  • if every file has its own key, you can delete the key and thus "delete" the data


SSDs use log-structured storage at a level below blocks

  • Writes are coarse-grained (you have to write a lot at once)
  • You don't want to write to the same cells too often, they will die
    • You have to do "wear-leveling"