DistOS 2015W Session 5

From Soma-notes

The Clouds Distributed Operating System

Clouds is a distributed OS running on a set of computers interconnected by a network. It unifies the separate computers into the appearance of a single machine.

The OS is based on two structuring paradigms:

  • Message-based OS
  • Object-based OS

Object Thread Model

The structure of Clouds is based on an object-thread model. The system has a set of objects, each defined by a class. Objects respond to messages: sending a message to an object causes the object to execute the corresponding method and reply back.

The system has active objects and passive objects.

  1. Active objects are objects that have one or more processes associated with them; they can also communicate with the external environment.
  2. Passive objects are those that currently have no active thread executing in them.

The contents of Clouds data are long lived. Since memory is implemented as a single-level store, data persists indefinitely and can survive system crashes and shutdowns.

Threads

Threads are the logical paths of execution that traverse objects and execute code in them. A Clouds thread is not bound to a single address space; several threads can enter an object simultaneously and execute concurrently. The nature of a Clouds object prohibits a thread from accessing any data outside the address space in which it is currently executing.

Interaction Between Objects and Threads

  1. Inter-object interfaces are procedural.
  2. Invocations work across machine boundaries.
  3. Objects in Clouds unify the concepts of persistent storage and memory into a single address space, which simplifies programming.
  4. Control flow is achieved by threads invoking objects.
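
The object-thread model above can be sketched as follows (class and method names here are illustrative, not actual Clouds APIs):

```python
# Minimal sketch of the Clouds object/thread model. A thread is a
# logical path of execution that enters objects by invoking their
# methods; an object's data is reachable only while the thread is
# executing inside that object.

class CloudsObject:
    """A passive object: persistent data plus entry-point methods."""
    def __init__(self, name):
        self.name = name
        self.data = {}          # persistent state (single-level store)

    def invoke(self, method, *args):
        # Inter-object interfaces are procedural: an invocation runs
        # the method and returns a reply to the calling thread.
        return getattr(self, method)(*args)

class Counter(CloudsObject):
    def increment(self):
        self.data["count"] = self.data.get("count", 0) + 1
        return self.data["count"]

class Reporter(CloudsObject):
    def __init__(self, name, counter):
        super().__init__(name)
        self.counter = counter

    def report(self):
        # Control flow: the thread leaves this object and enters the
        # counter object; it cannot touch counter.data directly.
        value = self.counter.invoke("increment")
        return f"{self.name}: count={value}"

counter = Counter("counter")
reporter = Reporter("reporter", counter)
print(reporter.report())   # the thread traverses reporter -> counter
```

The key property being modeled is that a thread crosses object boundaries only through procedural invocation, never by reaching into another object's data.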

Clouds Environment

  1. Integrates a set of homogeneous machines into one seamless environment.
  2. There are three logical categories of machines: compute servers, user workstations and data servers.


Plan 9

Plan 9 is a general-purpose, multi-user computing environment physically distributed across machines. Development of the system began in the late 1980s at Bell Labs - the birthplace of Unix. The original Unix OS had no support for networking, and over the years there were many attempts by others to create distributed systems with Unix compatibility. Plan 9, however, is a distributed system built following the original Unix philosophy.

The goals of this system were:

  1. To build a distributed system that can be centrally administered.
  2. To be cost effective using cheap, modern microcomputers.

The distribution itself is transparent to most programs. This is made possible by two properties:

  1. A per-process-group name space.
  2. Uniform access to most resources by representing them as files.

Unix Compatibility

The commands, libraries and system calls are similar to those of Unix, so a casual user cannot easily distinguish between the two. The problems in Unix ran too deep to fix, but many of its ideas were carried forward: things Unix addressed badly were improved, old tools were dropped, and others were polished and reused.


Similarities with Unix

  • shell
  • Various C compilers

Unique Features

What actually distinguishes Plan 9 is its organization. Plan 9 is divided along the lines of service function.

  • CPU servers and terminals use the same kernel.
  • Users may choose to run programs locally or remotely on CPU servers.
  • This lets the user choose whether they want a distributed or centralized system.

The design of Plan 9 is based on 3 principles:

  1. Resources are named and accessed like files in a hierarchical file system.
  2. A standard protocol, 9P, is used to access these resources.
  3. The disjoint hierarchies provided by different services are joined together into a single, private, hierarchical file name space.

Virtual Namespaces

When a user boots a terminal or connects to a CPU server, a new process group is created. Processes in the group can add to or rearrange their name space using two system calls, mount and bind.

  • Mount is used to attach a new file system to a point in the name space.
  • Bind is used to attach a kernel-resident (existing, mounted) file system to the name space, and also to rearrange pieces of the name space.
  • There is also unmount, which undoes the effects of the other two calls.

Namespaces in Plan 9 are on a per-process basis. Everything can be referenced by a unique name, and using mount and bind every process can build a custom namespace as it sees fit.

Since most resources are represented as files (and directories), the term namespace really refers only to the filesystem layout.
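
A toy model of per-process namespaces, mount, and bind (the class and data structures below are invented for illustration; real Plan 9 namespaces also support union directories and more):

```python
# Toy model of a per-process Plan 9 name space.
# mount attaches a new file tree at a point in the name space; bind
# re-exposes an already-mounted tree at another point; unmount undoes
# either. Each process group would hold its own Namespace instance,
# so these changes are never global.

class Namespace:
    def __init__(self):
        self.mounts = {}                    # mount point -> file tree

    def mount(self, point, tree):
        self.mounts[point] = tree           # e.g. a remote file server

    def bind(self, new_point, old_point):
        # Attach an existing, mounted tree under a second name.
        self.mounts[new_point] = self.mounts[old_point]

    def unmount(self, point):
        del self.mounts[point]

    def resolve(self, path):
        # Longest-prefix match against mount points.
        for point in sorted(self.mounts, key=len, reverse=True):
            if path == point or path.startswith(point + "/"):
                rest = path[len(point):].lstrip("/")
                return self.mounts[point].get(rest)
        return None

ns = Namespace()
ns.mount("/n/fileserver", {"lib/plan9.h": "header contents"})
ns.bind("/usr/include", "/n/fileserver")    # same tree, second name
print(ns.resolve("/usr/include/lib/plan9.h"))
```

After the bind, the same remote tree is visible under both names in this process's namespace, while other processes see neither.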

Parallel Programming

Parallel programming was supported in two ways:

  • Kernel provides simple process model and carefully designed system calls for synchronization.
  • Programming language supports concurrent programming.
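
Plan 9's concurrent-language support (Alef, and later the thread library) centered on CSP-style channels connecting lightweight processes. A rough analogue, sketched with Python threads and queues rather than Plan 9 primitives:

```python
# Rough analogue of CSP-style channel communication (as in Alef /
# libthread), using Python threads and queues; not Plan 9 code.
import threading
import queue

def worker(jobs: queue.Queue, results: queue.Queue):
    # Receive on one channel, send on another, like an Alef task.
    while True:
        n = jobs.get()
        if n is None:           # sentinel: no more work
            break
        results.put(n * n)

jobs, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(jobs, results))
t.start()
for n in range(5):
    jobs.put(n)                 # send work over the channel
jobs.put(None)
t.join()
squares = sorted(results.get() for _ in range(5))
print(squares)                  # squares of 0..4
```

The point of the design is that synchronization happens through communication over channels rather than through shared-memory locks.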

Legacy

Even though Plan 9 is no longer developed, the good ideas from the system still exist today. For example, the /proc virtual filesystem which displays current process information in the form of files exists in modern Linux kernels.
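
For example, on a modern Linux system a program can read its own entry in /proc as ordinary files. This sketch parses a couple of fields from /proc/self/status and degrades gracefully on systems without /proc:

```python
# Read process information through the /proc virtual filesystem, a
# files-as-interface idea carried forward from Plan 9 into Linux.
import os

def proc_status(fields=("Name", "Pid")):
    path = "/proc/self/status"
    if not os.path.exists(path):        # non-Linux systems lack /proc
        return None
    out = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in fields:
                out[key] = value.strip()
    return out

print(proc_status())
```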

Google File System

It is a scalable, distributed file system for large, data-intensive applications. It is crafted to Google's particular needs as a search-engine company.

Unlike most filesystems, GFS is not part of the kernel; it is implemented in user space and linked into individual applications. While this introduces some technical overhead, it gives the system more freedom to implement (or omit) certain non-standard features.

Link to an explanation on how GFS works [1]

Architecture

The architecture of the Google File System consists of a single master, multiple chunk servers and multiple clients. Chunk servers store the data in uniformly sized chunks. Each chunk is identified by a globally unique 64-bit handle assigned by the master at creation time. Chunks are split into 64 KB blocks, each with its own checksum for data-integrity checks. Chunks are replicated between servers, three ways by default. The master maintains all of the file system metadata, which includes the namespace and chunk locations.

Each chunk is 64 MB large (contrast this with typical filesystem block sizes of 512 or 4096 bytes), as the system is meant to hold enormous amounts of data - namely the internet. The large chunk size is also important for the scalability of the system: the larger the chunk size, the less metadata the master server has to store for any given amount of data. At this size, the master server is able to keep the entire set of metadata in memory, which improves performance by a significant margin.
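
As a rough back-of-the-envelope calculation (the ~64 bytes of per-chunk metadata is an assumption used only to show the ratio, not a figure from these notes):

```python
# Why 64 MB chunks keep the master's metadata small enough for RAM:
# the metadata cost is proportional to the number of chunks, so larger
# chunks mean fewer chunks per byte of stored data.
CHUNK_SIZE = 64 * 2**20            # 64 MiB chunks
METADATA_PER_CHUNK = 64            # bytes; an illustrative assumption

def master_metadata_bytes(total_data_bytes, chunk_size=CHUNK_SIZE):
    chunks = -(-total_data_bytes // chunk_size)   # ceiling division
    return chunks * METADATA_PER_CHUNK

petabyte = 2**50
print(master_metadata_bytes(petabyte) / 2**20, "MiB of metadata per PiB")
```

Under these assumptions, a petabyte of file data costs the master only on the order of a gigabyte of in-memory metadata; with 4 KB chunks the same data would need thousands of times more.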

Operation

Master and Chunk server communication consists of

  1. checking whether any chunk server is down
  2. checking whether any chunks are corrupted
  3. deleting stale chunk replicas
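
The first of these checks can be sketched as a heartbeat table on the master (class names and timing values are illustrative):

```python
# Heartbeat-based failure detection between the master and chunk
# servers: each heartbeat refreshes a timestamp, and a server that
# misses its window is considered down.
import time

class Master:
    TIMEOUT = 5.0                       # seconds without a heartbeat

    def __init__(self):
        self.last_seen = {}             # chunk server id -> timestamp

    def heartbeat(self, server_id, now=None):
        self.last_seen[server_id] = time.time() if now is None else now

    def down_servers(self, now=None):
        now = time.time() if now is None else now
        return [s for s, t in self.last_seen.items()
                if now - t > self.TIMEOUT]

master = Master()
master.heartbeat("cs1", now=100.0)
master.heartbeat("cs2", now=103.0)
print(master.down_servers(now=107.0))   # cs1 missed its window
```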

When a client wants to perform an operation on a file:

  1. it first asks the master server for the list of servers that store the chunks of the file it wants to access
  2. it receives a list of chunk servers, with multiple servers for each chunk
  3. it then communicates with the chunk servers directly to perform the operation

The system is geared towards appends and sequential reads. This is why the master server responds with multiple server addresses for each chunk: the client can request a piece from each server, increasing the data throughput roughly linearly with the number of servers.

Writes, in general, take the form of a special append operation. When appending, there is no chance that two clients will want to write to the same location at the same time, which avoids most potential synchronization issues. If there are multiple appends to the same file at the same time, the chunk servers are free to order them as they wish (chunks on different servers are not guaranteed to be byte-for-byte identical), and a change may also be applied multiple times. These issues are left for the application using GFS to resolve. While a problem in the general sense, this is good enough for Google's needs.
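
The read path described above can be sketched with mock servers (all names and data structures here are invented for illustration; real GFS messages and metadata are richer):

```python
# GFS client read path: ask the master which chunk servers hold each
# chunk of a file, then fetch the chunk data from the chunk servers
# directly, spreading requests across replicas for throughput.

class MockMaster:
    def __init__(self, chunk_table):
        self.chunk_table = chunk_table  # file -> [(handle, [replicas])]

    def lookup(self, filename):
        return self.chunk_table[filename]

class MockChunkServer:
    def __init__(self):
        self.chunks = {}                # chunk handle -> bytes

    def read(self, handle):
        return self.chunks[handle]

def read_file(master, servers, filename):
    data = b""
    for i, (handle, replicas) in enumerate(master.lookup(filename)):
        # Rotate across replicas so no single server is a bottleneck.
        server = servers[replicas[i % len(replicas)]]
        data += server.read(handle)
    return data

# Two chunks, each replicated on two of three servers.
servers = {name: MockChunkServer() for name in ("cs1", "cs2", "cs3")}
for name in ("cs1", "cs2"):
    servers[name].chunks["h1"] = b"hell"
for name in ("cs2", "cs3"):
    servers[name].chunks["h2"] = b"o!"
master = MockMaster({"/web/index": [("h1", ["cs1", "cs2"]),
                                    ("h2", ["cs2", "cs3"])]})
print(read_file(master, servers, "/web/index"))
```

Note that file bytes never flow through the master; it serves only the small metadata lookup.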

Redundancy

GFS is built with failure in mind. The system expects that at any time, there is some server or disk that is malfunctioning. The system deals with the failures as follows.

Chunk Servers

By default, chunks are replicated to three servers; the exact number can depend on the application doing the write. When a chunk server finds that some of its data is corrupt (its block checksums no longer match), the chunk is re-replicated from a server holding a good copy.

Master Server

For efficiency, there is only a single live master server at a time. While this stops the system from being completely distributed, it avoids many synchronization problems and suits Google's needs. At any point in time, there are multiple read-only master servers that copy metadata from the currently live master. Should the live master go down, they serve read operations from clients until one of the hot spares is promoted to be the new live master.

Stateless Servers

  • the server does not store state about clients
  • there is no caching at the client either
    • since most programs only care about the output
    • if a client wants an up-to-date result, it reruns the program
  • heartbeat messages are used to monitor servers
    • a good fit for a system that assumes changes (and failures) are frequent