<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rarteaga</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rarteaga"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Rarteaga"/>
	<updated>2026-04-22T14:08:13Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6949</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6949"/>
		<updated>2010-12-03T11:06:12Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Critique: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details. To fully understand the ideas being discussed, it is essential to be comfortable with the terminology used in the paper. The most important notions, at the core of the FlexSC paper, are system calls[21] and synchronous systems. These base definitions, along with numerous other helpful concepts, are explained in the section that follows. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to grasp the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. It is more important for the reader to understand the core ideas of these definitions, along with the underlying motivation for their existence, than the minute details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services for several reasons (one being security); system calls are therefore the messengers between the User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
&amp;lt;b&amp;gt;Mode Switches&amp;lt;/b&amp;gt; refer to transitions between processor modes: from user mode to kernel mode, or from kernel mode back to user mode. The direction does not matter; it is a general term. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;Synchronous Execution Model (System Call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time, and does not move on to the next system call until the previous one has finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event-driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; is a more sophisticated way of referring to wasteful or unnecessary delay in the system caused by system calls. This pollution stems from the fact that a system call invokes a mode switch, which is not a costless task. The &amp;quot;pollution&amp;quot; takes the form of data overwritten in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main-memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Pipeline Flushing===&lt;br /&gt;
The regular operation of a CPU has multiple instructions being fetched, decoded and executed at the same time. This parallel processing of instructions provides a significant speed advantage. During a mode switch, however, instructions in the user-mode pipeline are flushed and removed from the processor.[1] These lost instructions are part of the cost of a mode switch.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop current execution unexpectedly in order to handle the issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity will be referenced close together in time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As per the paper, locality refers to both types of locality defined above, i.e. temporal and spatial. A lack of locality here means that the data and instructions needed most frequently by the application keep being evicted from registers and caches by system calls, contributing to performance degradation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better. [2, p. 151]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction is a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a collection of syscall entries. In turn, a sysentry is a 64-byte data structure, which includes information such as syscall number, number of arguments,&lt;br /&gt;
the arguments, status, and return value [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are FlexSC&#039;s mechanism for allowing exception-less system calls. A syscall thread shares the virtual address space of the process it serves [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of using synchronous system calls comes from the easy-to-program nature of sequential operation. However, this ease of use also comes with undesirable side effects which can slow down the instructions per cycle (IPC) of the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy to implement for application programmers.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that although easy to use, they are not optimal. Previous research includes work into &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution with multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls do not make use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls. FlexSC describes methods to handle both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution and multicore systems has focused on managing device interrupts and limiting processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between the kernel and the user like FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking) and hybrid solutions. However, FlexSC provides a mechanism to separate system call execution from system call invocation. This is a key difference between FlexSC and previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s attempt to provide an alternative to synchronous systems calls. The downside to synchronous system calls includes the cumulative mode switch time of multiple system calls each called independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially the most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is called, FlexSC groups system calls together into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and the indirect cost, pollution of critical processor structures, associated with switching modes. System call batching works by first requesting as many system calls as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;, a set of memory pages shared between user mode and kernel mode. User-mode threads interact with syscall pages in order to request kernel-mode procedures (system calls). A user-mode thread may place a system call request in a free entry of a syscall page; the request is then executed once the batch condition is met, and the return value is stored back on the syscall page, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor getting the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and each entry may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute them, always in kernel mode. This is the mechanism that allows exception-less system calls to let a user-mode thread issue a request and continue to run while the kernel-level system call is being executed. In addition, since invocation is separate from execution, a process running on one core may request a system call whose execution is completed on an entirely different core. This gives exception-less system calls the unique capability of delegating all system call execution to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC Threads are immediately capable of running multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming for an exception-less system call interface compared to the relatively simple synchronous system call interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
FlexSC Threads implement an M-on-N threading model, where M is the number of user-space threads and N the number of syscall threads, with M greater than N [1]. This is similar to one of the common styles of implementing threads, the M:N model [22], where M represents user-space threads and N kernel threads. The M-on-N model, however, transparently uses FlexSC&#039;s exception-less system call mechanism, so its inner workings differ from the otherwise typical M:N model. The M-on-N model takes advantage of cheap user-space thread switching: as long as a user-space thread is ready, work continues in user space, thus minimizing the time spent blocking.&lt;br /&gt;
&lt;br /&gt;
===Implementation and Statistics===&lt;br /&gt;
Up until this point, we have discussed the theory behind FlexSC. It is just as important, however, to demonstrate the practical and statistical results of its implementation. FlexSC&#039;s implementation shows a very large leap over the synchronous system call method in use today. &lt;br /&gt;
Firstly, statistics for the current synchronous system are discussed. The data collected by the researchers show that system calls degrade user-mode IPC (instructions per cycle) by between 20% and 60%. To give a general idea of how many instructions are executed in one system call, we name a few with their corresponding instruction counts: according to the paper, pread(), pwrite() and open()+close() require 3739, 5689 and 6631 instructions respectively. These system calls are therefore not minuscule pieces of code that can be overlooked; they consume a large amount of time, hence FlexSC&#039;s attempt to lower this cost through methods mentioned above such as system call batching.  &lt;br /&gt;
&lt;br /&gt;
On a single core, exception-less system call batching improves the time per call from 55 nanoseconds to about 35 nanoseconds once 15 or more requests are batched. Moreover, Apache throughput (requests/sec) on Linux shows a dramatic increase of 10,000 requests/sec and more. This throughput grows further as more cores are added to the system, which demonstrates that FlexSC&#039;s main gains appear on systems with multiple cores.   &lt;br /&gt;
&lt;br /&gt;
Furthermore, on a 4-core system, 14% of the original 50% idle time is now used in user space. Latency as well is cut to half the original time on 1-, 2- and 4-core implementations. Another very interesting test case the researchers investigated was having numerous requests occurring at once in the system: they chose to execute 256 synchronous requests, and found that FlexSC&#039;s method is 60ms faster than the current synchronous method, with the gap continuing to grow with the number of cores in use.   &lt;br /&gt;
&lt;br /&gt;
Through the different methods employed by FlexSC, the statistics taken of the system demonstrate that the theory does hold in practice, suggesting that this system could be implemented in the computers we use every day with immediate results. These gains, however, appear when multiple cores are available and when batching happens many times; if both factors are small, there is no change, or efficiency can even decrease.&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law, which states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. This paper claims that the gap is mainly caused by the disparity in cost of accessing different processor resources such as registers, cache and memory.[1] In this manner, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to keep building more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls which can be batched before execution; in this situation the FlexSC system far outperformed traditional synchronous system calls.[1] This is why the research paper&#039;s focus is on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case, it appears that a hybrid solution of synchronous calls below some threshold and exception-less system calls above that threshold would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work so that it doesn&#039;t need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if they need to block, FlexSC would jump to the next system call and execute that one. This can cause a problem for system calls such as reading and writing, where the write call has an outstanding dependency on the read call. However, this could be resolved by using some kind of combined system call, that is, multiple system calls executed as one single call. Unfortunately, FlexSC does not have any current handling for such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of cores for system calls. Currently, FlexSC first wakes up core X to run a system call thread, and when another batch comes in, if core X is still busy, it will then try core X-1, and so on. Of all the algorithms they tested, it turned out that this, the simplest algorithm, was the most efficient algorithm for FlexSC scheduling. However, this was only tested with FlexSC running a single application at a time. FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where there is a single thread using 100% of a CPU and acting primarily in user space, such as &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as realized by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers. FlexSC was considered; it was found that FlexSC&#039;s reduction of mode switches, via shared memory pages between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it neither removes the requirement of data copying nor reduces the overheads associated with interrupts in IO-intensive tasks.&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) remains necessary, as per section 3.2 of the paper [1].&amp;lt;br&amp;gt;&lt;br /&gt;
That means adopters would have to add/modify the referenced lines and recompile the kernel after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done. It is mentioned that decisions are made based on workload requirements, which does not exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for system call thread assignments. It is unclear when that list is created: at installation time, generated initially, or does the installer have to do manual work? On a related note, scalability with increased cores is ambiguous; it is not clear how scalable the scheduler is. One gets the impression that it is very scalable, since each core spawns a system call thread, so as many threads as there are cores could be running concurrently, for one or more processes [1]. More explicit results, however, would have been beneficial. Further, the paper mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable; however, it would be useful to know whether these hardware threads (2 per core) would be treated as cores when turned on, i.e. would the scheduler then see eight cores, and would the predefined static core list need to be modified to list eight instead of four?&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same reasoning, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize on the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores, and hence use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
===Style===&lt;br /&gt;
Overall, the paper was fairly understandable. Some deeper knowledge of system calls and synchronous systems is recommended, almost necessary, for a true comprehension of the paper. The paper explains itself well, and even though we did need to research some background and related topics, it is well written. A reader with a basic knowledge of operating systems can understand the core concept and knows where to go to investigate the further details and descriptions that the paper lacks.&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept which involves collecting multiple system calls and submitting them as a single system call. They are used both in operating systems and paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, an additional mechanism where the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls the way exception-less system calls do. Multi-call system calls are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and in the presence of blocking the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques, including Soft Timers[13] and Lazy Receiver Processing[14], try to tackle locality of execution by handling device interrupts; both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, named Computation Spreading[15], is the most similar to the multicore execution of FlexSC; it proposes processor modifications that allow hardware migration of threads to specialized cores. However, it did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other solutions differ from FlexSC in two ways: they require a micro-kernel, and they cannot, as FlexSC can, dynamically adapt the proportion of cores used by the kernel or shared between user and kernel execution. While all these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behaviour. Typically, researchers have used threaded, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between these proposals and FlexSC is that none of the non-blocking approaches decouple the invocation of a system call from its execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., and Warfield, A., &amp;lt;i&amp;gt;Xen and the Art of Virtualization&amp;lt;/i&amp;gt;, Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), 2003, pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] Larus, J. and Parkes, M., &amp;lt;i&amp;gt;Using Cohort Scheduling to Enhance Server Performance&amp;lt;/i&amp;gt;, Proceedings of the USENIX Annual Technical Conference (ATEC), 2002, pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] Aron, M. and Druschel, P., &amp;lt;i&amp;gt;Soft Timers: Efficient Microsecond Software Timer Support for Network Processing&amp;lt;/i&amp;gt;, ACM Transactions on Computer Systems (TOCS) 18, 3 (2000), pp. 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] Druschel, P. and Banga, G., &amp;lt;i&amp;gt;Lazy Receiver Processing (LRP): A Network Subsystem Architecture for Server Systems&amp;lt;/i&amp;gt;, Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI), 1996, pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] Chakraborty, K., Wells, P. M., and Sohi, G. S., &amp;lt;i&amp;gt;Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly&amp;lt;/i&amp;gt;, Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006, pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23 Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] DeveloperWorks, &amp;lt;i&amp;gt;Kernel Command Using Linux System Calls&amp;lt;/i&amp;gt;, IBM, 2010. [http://www.ibm.com/developerworks/linux/library/l-system-calls/ HTML]&lt;br /&gt;
&lt;br /&gt;
[22] McCracken, Dave. &amp;lt;i&amp;gt;POSIX Threads and the Linux Kernel&amp;lt;/i&amp;gt;, IBM Linux Technology Center, Austin, TX. In Proceedings of the Ottawa Linux Symposium, 2002. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.8887&amp;amp;rep=rep1&amp;amp;type=pdf#page=330 PDF]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6942</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6942"/>
		<updated>2010-12-03T10:59:25Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Implementation and Statistics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;, written by Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details. To fully understand the ideas being discussed, it is essential to comprehend the basic vocabulary used in the paper. The most important notions at the core of the FlexSC paper are system calls[21] and synchronous execution. These base definitions, along with numerous other helpful ideas, are explained in the sections that follow. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. It is more important for the reader to grasp the core ideas behind these definitions, along with the underlying motivation for their existence, than to memorize the minute details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between user space and kernel space. User space is not given direct access to the kernel&#039;s services, for several reasons (one being security); system calls are therefore the messengers between user and kernel space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
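As a minimal illustration (Linux assumed), even a simple library call like getpid() crosses this gateway; the raw syscall() wrapper makes the crossing explicit:&lt;br /&gt;

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* getpid() via the C library wrapper and via the raw syscall()
 * entry point: both cross into the kernel through the system call
 * gateway and return the same answer. */
static pid_t pid_via_wrapper(void)
{
    return getpid();
}

static pid_t pid_via_raw_syscall(void)
{
    return (pid_t)syscall(SYS_getpid);
}
```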
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
A &amp;lt;b&amp;gt;Mode Switch&amp;lt;/b&amp;gt; is a transition from one processor mode to another, specifically from user mode to kernel mode or from kernel mode back to user mode. The term is general; it does not matter in which direction the switch occurs. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;: the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;Synchronous Execution Model (System Call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time, and does not move on to the next system call until the previous one has finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating systems have implemented mostly synchronous system calls.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
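A minimal Linux sketch of the related non-blocking flavour: with O_NONBLOCK set on a file descriptor, a read() that would normally block returns immediately with EAGAIN, so control stays with the caller (a fully asynchronous interface such as POSIX AIO goes further and performs the operation in the background). The helper names below are our own:&lt;br /&gt;

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Put a file descriptor into non-blocking mode. */
static int make_nonblocking(int fd)
{
    return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
}

/* Attempt a read that might otherwise block. With O_NONBLOCK set,
 * an empty pipe yields EAGAIN instead of suspending the caller, so
 * the caller can go do other work and retry later.
 * Returns 0 if data (or EOF) arrived, 1 if the read "would block",
 * and -1 on any other error. */
static int read_or_continue(int fd, char *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n >= 0)
        return 0;                      /* got data or EOF */
    return errno == EAGAIN ? 1 : -1;   /* 1: keep doing other work */
}
```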
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to wasteful or unnecessary delay in the system caused by system calls. This pollution is in direct correlation with the fact that a system call invokes a mode switch, which is not a costless task. The &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main-memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Pipeline Flushing===&lt;br /&gt;
The regular operation of a CPU has multiple instructions being fetched, decoded and executed at the same time.  The parallel processing of instructions provides a significant speed advantage in processing.  During a mode switch, however, instructions in the  user-mode pipeline are flushed and removed from the processor registers.[1]  These lost instructions are part of the cost of a mode switch.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop its current execution unexpectedly in order to handle an issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality; &amp;lt;b&amp;gt; spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity will be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
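The effect of spatial locality can be illustrated by traversing a 2-D array in row-major versus column-major order; C stores rows contiguously, so the first loop walks memory sequentially while the second strides by a whole row each step, wasting most of every cache line it pulls in (a sketch; actual timings depend on the machine):&lt;br /&gt;

```c
#include <stddef.h>

#define ROWS 256
#define COLS 256

/* Row-major traversal: consecutive iterations touch adjacent
 * addresses, so each cache line fetched is fully used. */
long sum_row_major(int a[ROWS][COLS])
{
    long s = 0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal: each step strides COLS * sizeof(int)
 * bytes, so most of every fetched cache line goes unused. */
long sum_col_major(int a[ROWS][COLS])
{
    long s = 0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            s += a[i][j];
    return s;
}
```

Both functions compute the same sum; only the memory access pattern, and therefore the cache behaviour, differs.&lt;br /&gt;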
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As used in the paper, locality refers to both types of locality defined above, temporal and spatial. A lack of locality thus means that the data and instructions needed most frequently by the application keep being evicted from registers and caches by system calls, contributing to performance degradation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better.[2, p. 151]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction refers to a typical assembly language instruction which usually takes two arguments: a value, and the memory location where that value should be stored.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a collection of syscall entries. In turn, a sysentry is a 64-byte data structure, which includes information such as the syscall number, the number of arguments, the arguments themselves, the status, and the return value [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
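The 64-byte entry can be approximated as follows (a sketch only; the field names and exact layout are illustrative, not FlexSC's actual definition):&lt;br /&gt;

```c
#include <stdint.h>

/* Illustrative approximation of a FlexSC syscall entry: the paper
 * describes a 64-byte record, shared between user and kernel mode,
 * holding the syscall number, the number of arguments, the arguments
 * themselves, a status word, and the return value. */
enum sysentry_status { SYSENTRY_FREE, SYSENTRY_SUBMITTED, SYSENTRY_DONE };

struct sysentry {
    uint16_t syscall_nr;    /* which system call to run           */
    uint16_t nr_args;       /* how many of args[] are meaningful  */
    uint32_t status;        /* FREE -> SUBMITTED -> DONE          */
    uint64_t args[6];       /* system call arguments              */
    int64_t  return_value;  /* valid once status becomes DONE     */
};

/* The layout above happens to pack exactly into 64 bytes. */
_Static_assert(sizeof(struct sysentry) == 64, "sysentry must be 64 bytes");

/* A syscall page is simply an array of such entries filling a page. */
#define PAGE_SIZE 4096
#define ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(struct sysentry))
```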
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are FlexSC&#039;s mechanism for executing exception-less system calls. A syscall thread shares the virtual address space of the process on whose behalf it runs [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls is the easy-to-program nature of sequential operation. However, this ease of use also comes with undesirable side effects which can reduce the instructions per cycle (IPC) achieved by the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy to use for application programmers.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that, although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls do not make use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC describes methods to handle both of these situations, as explained in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel and, although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between the kernel and the user as FlexSC can.[1] Non-blocking execution research has focused on threaded, event-based (non-blocking) and hybrid solutions; FlexSC, by contrast, provides a mechanism to separate system call execution from system call invocation, a key difference between FlexSC and previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s attempt to provide an alternative to synchronous systems calls. The downside to synchronous system calls includes the cumulative mode switch time of multiple system calls each called independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially the most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is called, FlexSC groups system calls together into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching reduces both the direct cost of mode switching and the indirect cost, the pollution of critical processor structures, associated with switching modes. System call batching works by first requesting as many system calls as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;: sets of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to request system calls from kernel-mode procedures. A user-mode thread may write a system call request into a free entry of a syscall page; the system call is then executed once the batch condition is met, and its return value is stored back in the same entry, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor reading the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and each entry may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel has finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute the request, always in kernel mode. This is the mechanic that allows exception-less system calls to provide the ability for a user-mode thread to issue a request and continue to run while the kernel level system call is being executed. In addition, since the system call invocation is separate from execution, a process running on one core may request a system call yet the execution of the system call may be completed on an entirely different core. This allows exception-less system calls the unique capability of having all system call execution delegated to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
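The division of labour can be sketched as three small routines over a shared page of entries (hypothetical names; the real FlexSC runs the consumer loop in kernel mode and handles scheduling, wake-ups, and memory ordering that this sketch omits):&lt;br /&gt;

```c
#include <stddef.h>

enum { ENTRY_FREE, ENTRY_SUBMITTED, ENTRY_DONE };

struct entry {
    int  status;
    long arg;
    long ret;
};

#define N_ENTRIES 8

/* User side: claim a free entry and submit a request, then keep
 * running; no exception is raised and no mode switch happens here. */
static int submit(struct entry *page, long arg)
{
    for (int i = 0; i < N_ENTRIES; i++) {
        if (page[i].status == ENTRY_FREE) {
            page[i].arg = arg;
            page[i].status = ENTRY_SUBMITTED;
            return i;            /* caller continues immediately */
        }
    }
    return -1;                   /* page is full */
}

/* "Syscall thread" side: pull submitted requests off the shared page
 * and execute them, possibly on a different core. The doubling here
 * is placeholder work standing in for a real kernel-mode call. */
static void run_syscall_thread(struct entry *page)
{
    for (int i = 0; i < N_ENTRIES; i++) {
        if (page[i].status == ENTRY_SUBMITTED) {
            page[i].ret = page[i].arg * 2;
            page[i].status = ENTRY_DONE;
        }
    }
}

/* User side again: collect a completed result and free the entry. */
static long collect(struct entry *page, int i)
{
    long r = page[i].ret;        /* only valid once status is DONE */
    page[i].status = ENTRY_FREE;
    return r;
}
```

Because submission and execution touch only the shared entries, the producer and consumer loops can run on different cores, which is exactly what makes core specialization possible.&lt;br /&gt;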
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC Threads are immediately capable of running multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming for an exception-less system call interface as compared to the relatively simple synchronous system call interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
FlexSC Threads implement an M-on-N threading model, where M user-space threads are multiplexed onto N syscall threads, with M greater than N [1]. This is similar to a common style of implementing threads referred to as the M:N model [22], where M represents user-space threads and N represents kernel threads. The M-on-N model, however, transparently uses FlexSC&#039;s exception-less system call mechanism, so its inner workings differ from the otherwise typical M:N model. The M-on-N model maximizes the amount of useful user-space work: as long as some user-space thread is ready, work continues in user space, thus minimizing the time spent blocked.&lt;br /&gt;
&lt;br /&gt;
===Implementation and Statistics===&lt;br /&gt;
Up until this point, we have discussed and illustrated the theory behind FlexSC&#039;s design. However, it is just as important to demonstrate the practical, statistical results of its implementation. In practice, FlexSC has shown a very large improvement over the synchronous system call method in use today. &lt;br /&gt;
First, consider the statistics for current synchronous systems. The data collected by the researchers show that system calls degrade user-mode IPC (instructions per cycle) by between 20% and 60%. To give a general idea of how many instructions are executed in one system call: according to the researchers, pread(), pwrite(), and open()+close() require 3739, 5689 and 6631 instructions respectively. These system calls are therefore not minuscule pieces of code that can be overlooked; they consume a large amount of time, hence FlexSC&#039;s attempt to lower this cost through the methods described above, such as system call batching. &lt;br /&gt;
&lt;br /&gt;
On a single core, exception-less system call batching reduces the cost per call from 55 nanoseconds to about 35 nanoseconds once 15 or more requests are batched. Moreover, Apache throughput (requests/sec) on Linux shows a dramatic increase of 10,000 requests/sec or more, and this throughput gain grows with the number of cores in the system. This supports the claim that FlexSC&#039;s main benefit appears in systems with multiple cores. &lt;br /&gt;
&lt;br /&gt;
Furthermore, on a 4-core system, 14% of the original 50% idle time is now being used in user space. Latency is also cut to half of the original time on 1-, 2- and 4-core configurations. Another very interesting test case the researchers investigated was the effect of numerous simultaneous requests: they executed 256 synchronous requests in the system and found that FlexSC&#039;s method is 60ms faster than the current synchronous method, with the gap continuing to grow as more cores are used. &lt;br /&gt;
&lt;br /&gt;
Through the different methods employed by FlexSC, the statistics taken of the system demonstrate that the theory does hold in practice, suggesting that this system could be implemented in the computers we use every day with immediate results. Note, however, that these improvements appear when multiple cores are available and when batches contain many system calls; when either factor is very small, there is no change, or efficiency can even decrease.&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law. Moore&#039;s Law states that the number of transistors on a chip doubles roughly every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. This paper claims that the gap is mainly caused by the disparity in the cost of accessing different processor resources such as registers, caches and memory.[1] In this manner, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is actually an attempt to change the way we view software. It is not enough to continue to build more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than a synchronous call. The real benefit of FlexSC comes when there are many system calls which can be batched before execution; in this situation the FlexSC system far outperformed traditional synchronous system calls.[1] This is why the research paper&#039;s focus is on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution, synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a great deal of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work that it does not need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if one needs to block, FlexSC jumps to the next system call and executes that one instead. This can cause a problem for system call sequences such as a read followed by a write, where the write call has an outstanding dependency on the read call. This could be resolved by some kind of combined system call, that is, multiple system calls executed as one single call; unfortunately, FlexSC does not currently handle such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
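The read-then-write dependency can be seen in an ordinary copy loop: each write()&#039;s buffer contents and byte count come from the preceding read()&#039;s result, so the write cannot usefully be issued before the read completes (plain synchronous code, shown for illustration):&lt;br /&gt;

```c
#include <unistd.h>

/* Copy everything from fd 'in' to fd 'out'. Each write() depends on
 * the preceding read(): its buffer contents and byte count are the
 * read's return data, so the two calls cannot run independently.
 * Returns the number of bytes copied, or -1 on error. */
long copy_fd(int in, int out)
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    while ((n = read(in, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {            /* handle short writes */
            ssize_t w = write(out, buf + off, (size_t)(n - off));
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}
```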
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of the cores to system calls. Currently, FlexSC first wakes up core X to run a syscall thread; when another batch comes in, if core X is still busy, it tries core X-1, and so on. Of all the algorithms they tested, this simplest algorithm turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When There Are Not More Threads Then Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, as in &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as realized by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers and for which FlexSC was considered. It was found that FlexSC&#039;s reduction of mode switches, via the use of memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it neither removed the requirement of data copying nor reduced the overheads associated with interrupts in IO-intensive tasks.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) remains necessary, as per section 3.2 of the paper [1].&amp;lt;br&amp;gt;&lt;br /&gt;
That means adopters would have to add or modify the referenced lines and then recompile the kernel, and repeat this after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done; the paper mentions that decisions are made based on workload requirements, which does not exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for system call thread assignments, but it is unclear when that list is created: at installation time, generated initially, or through manual work by the installer? On a related note, scalability with an increased number of cores is ambiguous; it is not clear how scalable the scheduler is. One gets the impression that it is very scalable, since each core spawns a system call thread, so as many threads as there are cores could be running concurrently, for one or more processes [1]. More explicit results, however, would have been beneficial. Further, the paper mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable, but it would be useful to know whether these hardware threads (two per core) would actually be treated as cores when turned on. Would the scheduler then realize that it can use eight cores? Would the predefined static core list need to be modified to list eight instead of four?&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Along the same line of reasoning, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize on the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores and use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a mechanism that collects multiple system calls and submits them as a single system call; they are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls. Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls as exception-less system calls do; multi-call system calls are executed sequentially, each one completing before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and in the presence of blocking the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
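The looped multi-call idea, where each call in the batch runs strictly in order and may consume the previous result, can be sketched abstractly as follows; the function names and the &quot;PREV&quot; placeholder are our own illustrative plumbing, not Cassyopia&#039;s actual interface:&lt;br /&gt;

```python
# Toy model of a Cassyopia-style looped multi-call: calls are submitted
# as one batch and executed sequentially, and the marker "PREV" in an
# argument list is replaced by the previous call's result.

def run_multicall(calls):
    """Execute (func, args) pairs in order, threading results through."""
    result = None
    for func, args in calls:
        args = [result if a == "PREV" else a for a in args]
        result = func(*args)  # each call completes before the next starts
    return result

# Example: "open" a resource by name, then use the handle it returned.
table = {"fileA": 42}
batch = [
    (lambda name: table[name], ["fileA"]),   # stand-in for an open() call
    (lambda handle: handle + 1, ["PREV"]),   # stand-in for a call on the fd
]
out = run_multicall(batch)  # out is 43
```

Note how this sketch makes the contrast with exception-less calls concrete: the loop above cannot reorder or overlap calls, whereas FlexSC entries may complete out of order.&lt;br /&gt;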
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques, including Soft Timers[13] and Lazy Receiver Processing[14], tackle locality of execution in the context of device interrupts; both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is the most similar to the multicore execution of FlexSC: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, the authors did not model TLBs, and on current hardware synchronous thread migration requires a costly interprocessor interrupt. Other solutions differ from FlexSC in two ways: they require a micro-kernel, and unlike FlexSC they cannot dynamically adapt the proportion of cores used by the kernel or shared between user and kernel execution. While these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC could provide a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threaded, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between the many proposals for non-blocking execution and FlexSC is that none of the non-blocking system calls decouple the system call invocation from its execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] BARHAM, P., DRAGOVIC, B., FRASER, K., HAND, S., HARRIS, T., HO, A., NEUGEBAUER, R., PRATT, I., AND WARFIELD, A. Xen and the art of virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP) (2003), pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] LARUS, J., AND PARKES, M. Using Cohort-Scheduling to Enhance Server Performance. In Proceedings of the annual conference on USENIX Annual Technical Conference (ATEC) (2002), pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] ARON, M., AND DRUSCHEL, P. Soft timers: efficient microsecond software timer support for network processing. ACM Trans. Comput. Syst. (TOCS) 18, 3 (2000), 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] DRUSCHEL, P., AND BANGA, G. Lazy receiver processing (LRP): a network subsystem architecture for server systems. In Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI) (1996), pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] CHAKRABORTY, K., WELLS, P. M., AND SOHI, G. S. Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2006), pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23 Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] DeveloperWorks, &amp;lt;i&amp;gt;Kernel Command using Linux System Calls&amp;lt;/i&amp;gt;, IBM, 2010.[http://www.ibm.com/developerworks/linux/library/l-system-calls/ HTML]&lt;br /&gt;
&lt;br /&gt;
[22] McCracken, Dave. &amp;lt;i&amp;gt;POSIX Threads and the Linux Kernel&amp;lt;/i&amp;gt;, IBM Linux Technology Center, Austin, TX. In Proceedings of the Ottawa Linux Symposium, 2002. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.8887&amp;amp;rep=rep1&amp;amp;type=pdf#page=330 PDF]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6515</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6515"/>
		<updated>2010-12-02T21:24:50Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Implementation and Statistics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf], for further details. To fully understand the ideas being discussed, it is essential to comprehend the basic vocabulary used in the paper. The most important notions at the core of the FlexSC paper are system calls[21] and synchronous execution. These base definitions, along with numerous other helpful ideas, are explained in the section that follows. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. It is more vital for the reader to understand the core ideas of these definitions, along with the underlying motivation for their existence, than the minuscule details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
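As a concrete example (using Python&#039;s standard library rather than anything from the paper), even a trivial request such as writing to a file descriptor crosses the user/kernel boundary through system calls:&lt;br /&gt;

```python
# A user-space program cannot touch kernel data structures directly;
# it asks the kernel for service through system calls. Here, os.getpid
# and os.write are thin wrappers over the getpid(2) and write(2) calls.
import os

pid = os.getpid()           # system call: ask the kernel for our process id
msg = "hello from pid {0}\n".format(pid).encode()
written = os.write(1, msg)  # system call: ask the kernel to write to stdout
```

Each of these two lines traps into the kernel and back, which is exactly the per-call cost FlexSC tries to amortize.&lt;br /&gt;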
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
&amp;lt;b&amp;gt;Mode switches&amp;lt;/b&amp;gt; refer to transitions between processor modes: specifically, moving from user-space mode to kernel mode, or from kernel mode to user-space mode. Which direction we are switching does not matter; this is a general term. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
&amp;lt;b&amp;gt;Synchronous Execution Model(System call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls specifically are managed in a serialized manner. Moreover, the synchronous model completes one system call at a time, and does not move onto the next system call until the previous system call is finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous system calls.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; is a more sophisticated way of referring to wasteful or unnecessary delay caused by system calls. This pollution is in direct correlation with the fact that a system call invokes a mode switch, which is not a costless task. The &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer: a table that reduces the frequency of main memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Pipeline Flushing===&lt;br /&gt;
The regular operation of a CPU has multiple instructions being fetched, decoded and executed at the same time.  The parallel processing of instructions provides a significant speed advantage in processing.  During a mode switch, however, instructions in the  user-mode pipeline are flushed and removed from the processor registers.[1]  These lost instructions are part of the cost of a mode switch.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop current execution unexpectedly in order to handle the issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality; &amp;lt;b&amp;gt; spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity will be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As per the paper, locality refers to both types defined above, i.e. temporal and spatial. A lack of locality here means that the data and instructions needed most frequently by the application continue to be displaced (from registers and caches) due to system calls, thus contributing to performance degradation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better. [2, p. 151]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction is a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a collection of syscall entries. In turn, a sysentry is a 64-byte data structure which includes information such as the syscall number, the number of arguments, the arguments themselves, a status field, and the return value [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
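Based only on the fields listed above, a sysentry might be packed roughly as follows; the field widths and ordering here are our own illustrative choices that happen to total 64 bytes, since the paper specifies the fields and total size but not this exact packing:&lt;br /&gt;

```python
# Hypothetical packing of a 64-byte syscall entry (sysentry).
import struct

# Status codes for a sysentry, as described in the text.
FREE, SUBMITTED, DONE = 0, 1, 2

# "=": standard sizes, no padding.
# syscall number (4) + number of args (4) + six 8-byte argument slots (48)
# + status (4) + return value (4) = 64 bytes total.
SYSENTRY_FMT = "=ii6qii"

def pack_entry(num, nargs, args, status, retval):
    """Pack one sysentry, zero-filling unused argument slots."""
    padded = list(args) + [0] * (6 - len(args))
    return struct.pack(SYSENTRY_FMT, num, nargs, *padded, status, retval)

entry = pack_entry(1, 3, [10, 20, 30], SUBMITTED, 0)  # 64 bytes
```

The 64-byte size is deliberate: it matches a typical cache-line size, so touching one entry pollutes at most one line of the shared page.&lt;br /&gt;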
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are FlexSC&#039;s mechanism for executing exception-less system calls. A syscall thread shares the virtual address space of the process on whose behalf it runs [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of using synchronous system calls comes from the easy-to-program nature of sequential operation. However, this ease of use also comes with undesirable side effects which can lower the instructions per cycle (IPC) of the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy to implement for application programmers.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that, although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls make no use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution and multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel and, although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between the kernel and the user as FlexSC can.[1] Non-blocking execution research has focused on threaded, event-based (non-blocking), and hybrid solutions; FlexSC, however, provides a mechanism to separate system call execution from system call invocation, a key difference from previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode switch time of multiple system calls each invoked independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is invoked, FlexSC groups system calls into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and the indirect cost, pollution of critical processor structures, associated with switching modes. System call batching works by first collecting as many system call requests as possible, then switching to kernel mode and executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;: a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to request kernel-mode procedures (system calls). A user-mode thread may write a system call request into a free entry of a syscall page; the call is then executed once the batch condition is met, and its return value is stored back on the syscall page, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor getting the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and each entry may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute the request, always in kernel mode. This is the mechanic that allows exception-less system calls to provide the ability for a user-mode thread to issue a request and continue to run while the kernel level system call is being executed. In addition, since the system call invocation is separate from execution, a process running on one core may request a system call yet the execution of the system call may be completed on an entirely different core. This allows exception-less system calls the unique capability of having all system call execution delegated to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
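The entry lifecycle described in points 3 and 4 can be sketched abstractly as follows; this is a toy model in ordinary Python, not kernel code, and all names are our own:&lt;br /&gt;

```python
# Toy model of the exception-less interface: a user thread submits a
# request on a free sysentry, a syscall thread later executes submitted
# entries, and the user thread picks up the result once status is DONE.

FREE, SUBMITTED, DONE = "free", "submitted", "done"

def submit(page, call, args):
    """User side: claim a free entry without trapping into the kernel."""
    for entry in page:
        if entry["status"] == FREE:
            entry.update(call=call, args=args, status=SUBMITTED)
            return entry
    return None  # page full: the thread library would switch user threads

def run_syscall_thread(page, handlers):
    """Kernel side: execute every submitted entry and mark it done."""
    for entry in page:
        if entry["status"] == SUBMITTED:
            entry["ret"] = handlers[entry["call"]](*entry["args"])
            entry["status"] = DONE

page = [{"status": FREE} for _ in range(4)]
e = submit(page, "add", (2, 3))                    # invocation (user core)
run_syscall_thread(page, {"add": lambda a, b: a + b})  # execution (any core)
```

The key point the sketch makes visible is that submit and run_syscall_thread are independent functions: nothing forces them to run on the same core, or even at the same time, which is exactly the decoupling of invocation from execution described above.&lt;br /&gt;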
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC Threads are a key component of the exception-less system call interface. FlexSC Threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC Threads can immediately run multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications containing many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC Threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential; this does increase the complexity of programming against an exception-less system call interface compared to the relatively simple synchronous interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
FlexSC Threads implement an M-on-N threading model, where M is the number of user-level threads and N is the number of syscall threads, with M greater than N [1]. This is similar to a common style of implementing threads referred to as the M:N model [22], where M represents user-space threads and N represents kernel threads. The M-on-N model, however, transparently uses FlexSC&#039;s exception-less system call mechanism, so its inner workings differ from the otherwise typical M:N model. The M-on-N model takes advantage of fast user-space thread switching: as long as a user-space thread is ready, work continues in user space, thus minimizing the time spent blocking.&lt;br /&gt;
&lt;br /&gt;
===Implementation and Statistics===&lt;br /&gt;
Up until this point we have discussed the theory behind FlexSC. However, it is just as important to demonstrate the practical, statistical results of the implementation. FlexSC&#039;s implementation shows a very large improvement over today&#039;s synchronous system call method. &lt;br /&gt;
First, the statistics for current synchronous systems are discussed. The data collected by the researchers show that system calls cause a large degradation in user IPC (instructions per cycle[9]) of between 20% and 60%. To give a general idea of how many instructions are executed in one system call: according to the researchers, pread(), pwrite(), and open()+close() require 3739, 5689 and 6631 instructions respectively. These system calls are therefore not minuscule pieces of code that can be overlooked; they consume a large amount of time, and FlexSC attempts to lower this cost through the methods mentioned above, such as system call batching.  &lt;br /&gt;
&lt;br /&gt;
On a single core, exception-less system call batching reduces the per-call cost from about 55 nanoseconds to about 35 nanoseconds once 15 or more requests are batched. Moreover, throughput (requests/sec) shows a dramatic increase of 10,000 or more in Apache throughput on Linux, and this throughput grows as more cores are added to the system. This supports the claim that FlexSC&#039;s main benefit appears on systems with multiple cores.   &lt;br /&gt;
&lt;br /&gt;
Furthermore, on a 4-core system, 14% of the original 50% idle time is now spent in user space. Where the system once sat doing nothing, that time is now put to good use running user-space work. Latency is likewise cut to half of the original time on 1-, 2- and 4-core configurations. Another interesting test case the researchers investigated was having numerous requests in the system at once: executing 256 concurrent requests, they found that FlexSC&#039;s method is 60ms faster than the current synchronous method, and the advantage grows as more cores are used.   &lt;br /&gt;
&lt;br /&gt;
Across the different mechanisms exercised by FlexSC, the statistics gathered from the system demonstrate that the theory holds in practice. This suggests that the technique could be adopted on the computers we use every day, with visible results.&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law, which states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time it has opened a large gap between the actual performance of efficient and inefficient software. This paper claims that the gap is mainly caused by the disparity in the cost of accessing different processor resources such as registers, cache and memory.[1] In this light, the FlexSC interface is not just an attempt to make current system calls more efficient; it is an attempt to change the way we view software. It is not enough to keep building more powerful machines if the code we run does not become more efficient along with the gain in raw power. Instead, we need to focus on appropriate allocation and usage of that power, as failing to do so is the origin of the gap between potential and achieved performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest that exception-less system calls only outperformed synchronous system calls when the system was issuing many system calls. For an individual system call, the overhead of the FlexSC interface is greater than that of a synchronous call; the real benefit comes when many system calls can be batched before execution, in which case FlexSC far outperforms traditional synchronous calls.[1] This is why the paper focuses on server-like applications: a server must handle many user requests efficiently to be useful. For the general case, then, a hybrid solution that issues synchronous calls below some threshold and exception-less calls above it would appear to be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
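The hybrid policy suggested above might look like the following sketch. The threshold value and the function name are invented for illustration; the paper itself proposes no such policy.

```python
# Hypothetical hybrid dispatch: below some number of pending calls, the
# cheaper per-call synchronous path is used; above it, calls are batched
# via the exception-less path. The crossover point is made up.
BATCH_THRESHOLD = 8

def choose_mechanism(n_pending):
    """Pick a dispatch mechanism for n_pending concurrent system calls."""
    if n_pending < BATCH_THRESHOLD:
        return "synchronous"      # per-call overhead is lower for few calls
    return "exception-less"       # one mode switch amortized over the batch
```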
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers exhibit a lot of concurrency and independent parallelism: it can &#039;harvest&#039; enough independent work that it does not need to track dependencies between system calls. This could be a problem in other situations. Since FlexSC system calls are inherently asynchronous, a call that needs to block simply causes FlexSC to move on and execute the next one. This is problematic for dependent system calls such as reading and writing, where the write call has an outstanding dependency on the read call. The issue could be resolved with some kind of combined system call, that is, multiple system calls executed as one unit; unfortunately, FlexSC currently has no support for such a construct.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
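The dependency problem can be made concrete with a toy combined call. The helper read_then_write and the in-memory file table are hypothetical; FlexSC provides no such facility, which is exactly the gap the critique points out.

```python
# Toy "combined" call: both operations are submitted as one unit, so an
# asynchronous executor can never run the write before the read it
# depends on. All names here are illustrative, not a real syscall API.
def read_then_write(src, dst, files):
    data = files[src]      # the 'read' completes first...
    files[dst] = data      # ...before the dependent 'write' runs
    return len(data)

files = {"in.txt": b"hello", "out.txt": b""}
n = read_then_write("in.txt", "out.txt", files)
```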
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of them to system calls. Currently, FlexSC first wakes up core X to run a syscall thread; when another batch arrives and core X is still busy, it tries core X-1, and so on. Of all the algorithms the authors tested, this simplest one turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; the scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
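The heuristic can be sketched in a few lines (illustrative only; the core ids and the busy set are invented for the example):

```python
# Core-selection heuristic described above: wake core X first, and if it
# is still busy fall back to X-1, X-2, and so on down to core 0.
def pick_syscall_core(busy, n_cores):
    """Return the id of the core a new batch should wake, or None."""
    for core in range(n_cores - 1, -1, -1):   # try core X, then X-1, ...
        if core not in busy:
            return core
    return None   # every core is already running a syscall thread
```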
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, as in many scientific programs, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not well suited to data-intensive, IO-centric applications, as realized by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers and who considered FlexSC for that purpose. He found that FlexSC&#039;s reduction of mode switches, via memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. The technique was not useful for IO-intensive work, however, since it removes neither the requirement of data copying nor the overheads associated with interrupts in IO-intensive tasks.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. no application code needs to be modified, a small kernel change (3 lines of code) is still required, as per section 3.2 of the paper [1].&amp;lt;br&amp;gt;&lt;br /&gt;
That means adopters would have to add/modify the referenced lines and recompile the kernel, and repeat this after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running syscall threads. It is unclear how this dynamic allocation is done: the paper says decisions are based on workload requirements, which does not exactly clarify the mechanism.&amp;lt;br&amp;gt;&lt;br /&gt;
Further, the paper mentions that a predefined, static list of cores is used for syscall thread assignment. It is unclear when that list is created: at installation time, generated initially, or does the installer have to do manual work? On a related note, scalability with increasing core counts is ambiguous. One gets the impression the scheduler is very scalable, since each core spawns a syscall thread and thus as many syscall threads as there are cores could run concurrently, for one or more processes [1]; more explicit results, however, would have been beneficial. The paper also mentions that hyper-threading was turned off to ease analysis of the results. That is understandable, but it would be useful to know whether those hardware threads (2 per core) would be treated as cores when turned on. Would the scheduler then see eight cores, and would the predefined static core list need to list eight instead of four?&amp;lt;br&amp;gt;  &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same lines, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize about the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores and use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept in which multiple system calls are collected and submitted as a single system call; they are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a related technique named the looped multi-call, in which the result of one system call can be fed as an argument to another system call within the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not explore parallel execution of system calls, nor do they address blocking as exception-less system calls do. Multi-call system calls execute sequentially, each completing before the next may start, whereas exception-less system calls can execute in parallel and, when one blocks, the next can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
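A looped multi-call in the style attributed to Cassyopia might be sketched as follows. The sentinel PREV, the run_multicall helper and the example calls are all invented for illustration.

```python
# Toy looped multi-call: the calls run strictly in order, and the
# sentinel PREV in an argument list is replaced by the previous call's
# result, mirroring result-forwarding within one multi-call.
PREV = "<result of previous call>"

def run_multicall(calls):
    """calls: list of (func, args) executed sequentially as one unit."""
    result = None
    for func, args in calls:
        args = [result if a is PREV else a for a in args]
        result = func(*args)   # each call completes before the next starts
    return result

# e.g. feed the length of a buffer into a follow-up doubling step
out = run_multicall([(len, ["hello"]), (lambda n: n * 2, [PREV])])
```

The sequential loop is the key contrast with FlexSC: here nothing can run in parallel, but dependencies between calls are trivially honored.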
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques, such as Soft Timers[13] and Lazy Receiver Processing[14], tackle locality of execution in the context of device interrupts: both try to limit the processor interference associated with interrupt handling without increasing the latency of servicing requests. A technique named Computation Spreading[15] is the most similar to FlexSC&#039;s multicore execution: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, that work did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other solutions differ from FlexSC in two ways: they require a micro-kernel, and they cannot, as FlexSC can, dynamically adapt the proportion of cores used exclusively by the kernel versus cores shared between user and kernel execution. While all these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behaviour. Researchers have typically used threaded, event-based (non-blocking) and hybrid systems to obtain high performance in server applications. The main difference between these proposals and FlexSC is that none of the non-blocking approaches decouple system call invocation from system call execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Poceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., and Warfield, A., &amp;lt;i&amp;gt;Xen and the Art of Virtualization&amp;lt;/i&amp;gt;. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP) (2003), pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] Larus, J., and Parkes, M., &amp;lt;i&amp;gt;Using Cohort-Scheduling to Enhance Server Performance&amp;lt;/i&amp;gt;. In Proceedings of the USENIX Annual Technical Conference (ATEC) (2002), pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] Aron, M., and Druschel, P., &amp;lt;i&amp;gt;Soft Timers: Efficient Microsecond Software Timer Support for Network Processing&amp;lt;/i&amp;gt;. ACM Transactions on Computer Systems (TOCS) 18, 3 (2000), pp. 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] Druschel, P., and Banga, G., &amp;lt;i&amp;gt;Lazy Receiver Processing (LRP): A Network Subsystem Architecture for Server Systems&amp;lt;/i&amp;gt;. In Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI) (1996), pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] Chakraborty, K., Wells, P. M., and Sohi, G. S., &amp;lt;i&amp;gt;Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly&amp;lt;/i&amp;gt;. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2006), pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23 Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] IBM DeveloperWorks, &amp;lt;i&amp;gt;Kernel Command using Linux System Calls&amp;lt;/i&amp;gt;, IBM, 2010.[http://www.ibm.com/developerworks/linux/library/l-system-calls/ HTML]&lt;br /&gt;
&lt;br /&gt;
[22] McCracken, Dave. &amp;lt;i&amp;gt;POSIX Threads and the Linux Kernel&amp;lt;/i&amp;gt;, IBM Linux Technology Center, Austin, TX. In Proceedings of the Ottawa Linux Symposium, 2002. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.8887&amp;amp;rep=rep1&amp;amp;type=pdf#page=330 PDF]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6514</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6514"/>
		<updated>2010-12-02T21:24:30Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf], for further details. To fully understand the ideas being discussed, it is essential to comprehend the basic terminology used in the paper. The most important notions at the core of the FlexSC paper are system calls[21] and synchronous execution. These base definitions, along with numerous other helpful ideas, are covered in the section that follows. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to comprehend the paper. It is more important for the reader to grasp the core ideas behind these definitions, along with the underlying motivation for their existence, than to know the minute details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
A &amp;lt;b&amp;gt;mode switch&amp;lt;/b&amp;gt; is a transition from one execution mode to another, specifically from user mode to kernel mode or from kernel mode back to user mode; the term is general and covers both directions. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;: the time necessary to issue a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
&amp;lt;b&amp;gt;Synchronous Execution Model(System call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls specifically are managed in a serialized manner. Moreover, the synchronous model completes one system call at a time, and does not move onto the next system call until the previous system call is finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous system calls.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to the wasteful or unnecessary delay in the system caused by system calls. This pollution is a direct consequence of the mode switch a system call invokes, which is not a costless operation: the &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer, a table that reduces the frequency of main-memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Pipeline Flushing===&lt;br /&gt;
The regular operation of a CPU has multiple instructions being fetched, decoded and executed at the same time.  The parallel processing of instructions provides a significant speed advantage in processing.  During a mode switch, however, instructions in the  user-mode pipeline are flushed and removed from the processor registers.[1]  These lost instructions are part of the cost of a mode switch.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop current execution unexpectedly in order to handle the issue. There are many situations which generate processor exceptions including undefined instructions and software interrupts(system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there is a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity tend to be referenced close together in time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As used in the paper, locality refers to both types defined above, i.e. temporal and spatial. Lack of locality here means that the data and instructions the application needs most frequently keep being evicted from registers and caches due to system calls, thus contributing to performance degradation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done per unit of time, e.g. n transactions per hour; the higher n is, the better. [2, p. 151]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction is a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a collection of syscall entries. In turn, a sysentry is a 64-byte data structure which includes information such as the syscall number, the number of arguments, the arguments themselves, a status field, and the return value [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
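As a rough illustration of such a fixed-size entry, the following sketch packs one entry into exactly 64 bytes. The field widths and their order are guesses made for this example; the paper specifies only the total size and the kinds of fields, so this is not FlexSC's actual layout.

```python
import struct

# A guessed 64-byte layout: syscall number, argument count, status and
# padding as 32-bit ints, then a 64-bit return value and five 64-bit
# argument slots. 4*4 + 8 + 5*8 = 64 bytes total.
ENTRY_FMT = "<iiiiq5q"   # number, nargs, status, padding, retval, 5 args

def pack_entry(number, nargs, status, retval, args):
    """Serialize one hypothetical syscall entry into exactly 64 bytes."""
    args = list(args) + [0] * (5 - len(args))   # zero-fill unused slots
    return struct.pack(ENTRY_FMT, number, nargs, status, 0, retval, *args)

raw = pack_entry(number=0, nargs=3, status=1, retval=0, args=[10, 20, 30])
```

A fixed size like this lets user and kernel index entries on the shared page by simple offset arithmetic, with no pointers crossing the boundary.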
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are FlexSC&#039;s mechanism for executing exception-less system calls. A syscall thread shares the virtual address space of the process on whose behalf it runs [1].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls is the ease of programming that comes with sequential operation. However, this ease of use comes with undesirable side effects that can lower the instructions per cycle (IPC) achieved by the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while remaining easy for application programmers to adopt.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that although they are easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required.[6] The difference is that multi-calls neither make use of parallel execution of system calls nor manage the blocking aspect of synchronous system calls; FlexSC handles both, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores, they cannot dynamically adapt the proportion of cores used by the kernel versus cores shared between kernel and user, as FlexSC can.[1] Non-blocking execution research has focused on threaded, event-based (non-blocking) and hybrid solutions; FlexSC differs in providing a mechanism that separates system call execution from system call invocation.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode-switch time of multiple independently issued system calls, the state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is issued, FlexSC groups system calls into batches which are then executed together, minimizing the frequency of mode switches between user and kernel modes. Batching reduces both the direct cost of mode switching and the indirect cost, the pollution of critical processor structures associated with switching modes. System call batching works by first queueing as many system call requests as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can designate a single core to run all system calls. This is possible because, with exception-less system calls, system call execution is decoupled from system call invocation, as described further in the &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;: a set of memory pages shared between user mode and kernel mode. A user-mode thread makes a system call request by filling in a free entry of a syscall page; the call is executed once the batch condition is met, and its return value is stored back in the same entry. The user-mode thread later returns to the syscall page to obtain the return value. Neither issuing the system call via the syscall page nor retrieving the return value generates a processor exception. Each syscall page is a table of syscall entries, and each entry is in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be written into the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel may proceed to invoke the requested system call; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel has finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
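The Free/Submitted/Done protocol can be sketched as a small user-side state machine over one entry of a shared page. The three states come from the paper; the struct layout and function names below are hypothetical, not FlexSC&#039;s actual ABI:

```c
#include <assert.h>

/* Hypothetical layout of one syscall-page entry; the three states come
 * from the paper, the field names and sizes do not. */
enum entry_status { ST_FREE, ST_SUBMITTED, ST_DONE };

struct syscall_entry {
    volatile enum entry_status status;  /* shared with the kernel side */
    int  sysnum;                        /* system call number */
    long args[6];                       /* up to six arguments, as on Linux */
    long ret;                           /* filled in by the kernel */
};

/* User side: claim a free entry and post a request. Note that no
 * processor exception is raised at any point. */
static int submit(struct syscall_entry *e, int sysnum, long arg0)
{
    if (e->status != ST_FREE)
        return -1;                      /* entry busy: caller tries another */
    e->sysnum  = sysnum;
    e->args[0] = arg0;
    e->status  = ST_SUBMITTED;          /* kernel may now execute it */
    return 0;
}

/* User side: poll for completion and collect the return value. */
static int collect(struct syscall_entry *e, long *ret)
{
    if (e->status != ST_DONE)
        return -1;                      /* not finished: keep doing work */
    *ret = e->ret;
    e->status = ST_FREE;                /* entry can be reused */
    return 0;
}
```

In the real system a kernel-side syscall thread performs the Submitted-to-Done transition concurrently; a demonstration has to simulate that step by hand.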
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate system call invocation from system call execution, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of a syscall thread is to pull requests from syscall pages and execute them, always in kernel mode. This is the mechanism that allows a user-mode thread to issue a request and continue running while the kernel-level system call is being executed. In addition, since invocation is separate from execution, a process running on one core may request a system call whose execution completes on an entirely different core. This gives exception-less system calls the unique capability of delegating all system call execution to specific cores while other cores continue user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
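A minimal sketch of a syscall thread&#039;s service loop under these assumptions; the entry layout and names are hypothetical, and the assignment of `sysnum` into `ret` stands in for invoking a real handler:

```c
#include <assert.h>

#define PAGE_ENTRIES 8
enum sc_status { SC_FREE, SC_SUBMITTED, SC_DONE };

struct sc_entry {
    enum sc_status status;
    int  sysnum;
    long ret;
};

/* Sketch of a syscall thread's inner loop: scan the shared page and
 * execute every submitted request entirely in kernel mode. Which core
 * runs this loop is independent of which core posted the requests,
 * which is what lets FlexSC dedicate cores to system call execution. */
static int service_page(struct sc_entry page[PAGE_ENTRIES])
{
    int serviced = 0;
    for (int i = 0; i < PAGE_ENTRIES; i++) {
        if (page[i].status == SC_SUBMITTED) {
            page[i].ret = (long)page[i].sysnum;  /* stand-in for real work */
            page[i].status = SC_DONE;
            serviced++;
        }
    }
    return serviced;            /* requests handled in this single pass */
}
```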
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means FlexSC Threads can immediately run multi-threaded Linux applications with no modification. The intended use of these threads is with server-type applications containing many user-mode threads. To accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system; multiple user-mode threads are multiplexed onto a single syscall page, which in turn has a single kernel-level thread to execute the system calls. Programming with FlexSC threads can be compared to event-driven programming, since interactions are not guaranteed to be sequential. This does increase the complexity of programming against an exception-less system call interface compared with the relatively simple synchronous interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
FlexSC Threads implement an M-on-N threading model, in which M user-mode threads are multiplexed onto N syscall threads, with M greater than N [1]. This is similar to the common M:N threading model [22], where M user-space threads map onto N kernel threads. The M-on-N model, however, transparently uses FlexSC&#039;s exception-less system call mechanism, so its inner workings differ from the otherwise typical M:N model. The model maximizes switching among user-space threads: as long as some user-space thread is ready, work continues in user space, thereby minimizing the time spent blocked.&lt;br /&gt;
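The dispatch decision at the heart of the M-on-N model, "run some other ready user thread instead of blocking", can be sketched as a simple search. This is an illustration only; FlexSC-Threads&#039; real scheduler is more involved:

```c
#include <assert.h>

/* The M-on-N dispatch decision in miniature: pending[t] marks user
 * thread t as having an outstanding syscall. Return the next ready
 * thread after `cur`, or -1 only when every thread is blocked and
 * user space genuinely must wait. */
static int next_ready(const int pending[], int m, int cur)
{
    for (int i = 1; i <= m; i++) {
        int t = (cur + i) % m;          /* round-robin over M threads */
        if (!pending[t])
            return t;                   /* a thread with no pending call */
    }
    return -1;                          /* all M threads are blocked */
}
```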
&lt;br /&gt;
===Implementation and Statistics===&lt;br /&gt;
Up until this point, we have discussed and illustrated the theory behind FlexSC&#039;s design. It is just as important, however, to examine the practical, statistical results of its implementation. FlexSC&#039;s implementation shows a very large improvement over the synchronous system call method in use today. &lt;br /&gt;
First, consider the statistics for the current synchronous approach. The data collected by the researchers show that system calls degrade user-mode IPC (instructions per cycle) by between 20% and 60%. To give a general idea of how many instructions are executed in one system call: according to the paper, pread(), pwrite() and open()+close() require 3739, 5689 and 6631 instructions respectively. These system calls are therefore not minuscule pieces of code that can be overlooked; they consume a significant amount of time, which FlexSC attempts to reduce through the methods described above, such as system call batching. &lt;br /&gt;
&lt;br /&gt;
On a single core, exception-less system call batching improves the time per call from 55 nanoseconds to about 35 nanoseconds once batches of 15 or more requests are used. Moreover, throughput (requests/sec) shows a dramatic increase of 10,000 or more in Apache throughput on Linux, and it increases further with the number of cores in the system. This supports the claim that FlexSC&#039;s main benefit appears on systems with multiple cores. &lt;br /&gt;
&lt;br /&gt;
Furthermore, on a 4-core system, 14% of the original 50% idle time is now spent in user space: where the system was once doing nothing, that time is now put to good use running user-space work. Latency is likewise cut to half the original on 1-, 2- and 4-core configurations. Another interesting test case the researchers investigated was having numerous requests occur at once: executing 256 concurrent requests, they found FlexSC&#039;s method to be 60 ms faster than the current synchronous method, with the gap continuing to grow as more cores are used. &lt;br /&gt;
&lt;br /&gt;
Through the different methods embodied in FlexSC, the statistics collected demonstrate that the theory holds in practice. This suggests that such a system could be adopted on the computers we use every day, with results visible immediately. &lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law, which states that the number of transistors on a chip doubles roughly every 18 months.[10] This has led to very large increases in the performance potential of software, but it has also opened a large gap between the actual performance of efficient and inefficient software. The paper claims that this gap is mainly caused by the disparity in cost of accessing different processor resources such as registers, cache and memory.[1] In this light, the FlexSC interface is not just an attempt to make current system calls more efficient; it is an attempt to change the way we view software. It is not enough to keep building more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and use of that power, as failing to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls that can in turn be batched before execution; in this situation FlexSC far outperformed traditional synchronous system calls.[1] This is why the paper focuses on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution, synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
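The hybrid policy is this critique&#039;s suggestion, not something FlexSC implements. A sketch of the dispatch decision, with a threshold that would have to be measured for a given system:

```c
#include <assert.h>

/* Sketch of the hybrid policy suggested above (our suggestion, not a
 * FlexSC feature): below a threshold of outstanding calls, the per-call
 * overhead of the exception-less interface is not amortized, so issue
 * synchronously; at or above it, batch via syscall pages. */
enum call_path { PATH_SYNCHRONOUS, PATH_EXCEPTIONLESS };

static enum call_path choose_path(int outstanding_calls, int threshold)
{
    return outstanding_calls >= threshold ? PATH_EXCEPTIONLESS
                                          : PATH_SYNCHRONOUS;
}
```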
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers exhibit a great deal of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work that it does not need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if one needs to block, FlexSC simply jumps to the next system call and executes that one. This can cause a problem for system call pairs such as a read followed by a write, where the write has an outstanding dependency on the read. This could be resolved by some kind of combined system call, that is, multiple system calls executed as one single call; unfortunately, FlexSC currently has no handling for such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
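The read-then-write hazard can be made concrete with a hypothetical guard; nothing below is FlexSC API:

```c
#include <assert.h>

enum dep_state { DEP_PENDING, DEP_DONE };

/* Hypothetical guard for the read-then-write hazard described above.
 * A write whose buffer is produced by an earlier read must be held
 * back until that read has completed; otherwise it would be batched
 * alongside the read and consume stale data. A combined read+write
 * call, as suggested in the critique, would express the dependency
 * inside one request instead. */
static int may_submit_dependent_write(enum dep_state read_state)
{
    return read_state == DEP_DONE;
}
```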
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of the cores to system calls. Currently, FlexSC first wakes up core X to run a syscall thread; when another batch comes in and core X is still busy, it tries core X-1, and so on. Of all the algorithms the authors tested, this simplest algorithm turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
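The simple top-down core search described here can be sketched as follows (names are hypothetical):

```c
#include <assert.h>

/* The simple search policy described above, as we understand it:
 * prefer the highest-numbered core for syscall work and walk downward
 * past busy cores, leaving the low-numbered cores for user threads. */
static int pick_syscall_core(const int core_busy[], int ncores)
{
    for (int c = ncores - 1; c >= 0; c--)
        if (!core_busy[c])
            return c;
    return -1;                  /* every core is already busy */
}
```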
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, as in &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO===&lt;br /&gt;
FlexSC is not suited to data-intensive, IO-centric applications, as realized by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers. FlexSC was considered, and it was found that FlexSC&#039;s reduction of mode switches, via memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it neither removes the requirement of data copying nor reduces the interrupt-handling overheads of IO-intensive tasks.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) is still required, as per section 3.2 of the paper [1].&amp;lt;br&amp;gt;&lt;br /&gt;
This means adopters would have to add or modify the referenced lines, and recompile the kernel, after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler attempts to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done; decisions are said to be made based on workload requirements, which does not exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for syscall-thread assignment, but it is unclear when that list is created: at installation time, generated at startup, or by manual work on the installer&#039;s part. On a related note, scalability with increased core counts is ambiguous; it is not clear how scalable the scheduler is. One gets the impression that it is very scalable, since each core spawns a system call thread and thus as many syscall threads as there are cores could run concurrently, for one or more processes [1]; more explicit results, however, would have been beneficial. Further, the paper mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable, but it would be nice to know whether the hardware threads (2 per core) would be treated as cores when turned on, i.e. would the scheduler then realize it can use eight cores? Would the predefined static core list then need to list eight instead of four?&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same lines, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize about the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores, and hence use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept in which multiple system calls are collected and submitted as a single system call; they are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address blocking the way exception-less system calls do. Multi-call system calls are executed sequentially, each one completing before the next may start; exception-less system calls, on the other hand, can execute in parallel, and when one blocks, the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
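A looped multi-call can be illustrated with a toy in-kernel chain where each result feeds the next call&#039;s argument. The function-pointer table stands in for real system call handlers and is purely illustrative:

```c
#include <assert.h>

typedef long (*op_fn)(long);

/* Toy illustration of a looped multi-call in the Cassyopia style:
 * several calls cross into the kernel together, and the result of
 * each call is fed as the argument of the next. */
static long run_looped_multicall(op_fn ops[], int nops, long arg)
{
    for (int i = 0; i < nops; i++)
        arg = ops[i](arg);              /* result chains forward */
    return arg;
}

/* two toy "system calls" for demonstration */
static long op_double(long x) { return 2 * x; }
static long op_inc(long x)    { return x + 1; }
```

Note that the chain is strictly sequential, which is exactly the limitation (no parallel execution, no blocking management) that distinguishes multi-calls from exception-less system calls.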
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques, such as Soft Timers[13] and Lazy Receiver Processing[14], tackle locality of execution by handling device interrupts: both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is the most similar to FlexSC&#039;s multicore execution; it proposes processor modifications that allow hardware migration of threads to specialized cores. However, its authors did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other solutions differ from FlexSC in two ways: they require a micro-kernel, and they cannot, as FlexSC can, dynamically adapt the proportion of cores used by the kernel versus cores shared between user and kernel execution. While all of these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threaded, event-based (non-blocking) and hybrid systems to obtain high performance in server applications. The main difference between these proposals and FlexSC is that none of the non-blocking approaches decouple system call invocation from system call execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] BARHAM, P., DRAGOVIC, B., FRASER, K., HAND, S., HARRIS, T., HO, A., NEUGEBAUER, R., PRATT, I., AND WARFIELD, A. Xen and the art of virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP) (2003), pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] LARUS, J., AND PARKES, M. Using Cohort-Scheduling to Enhance Server Performance. In Proceedings of the annual conference on USENIX Annual Technical Conference (ATEC) (2002), pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] ARON, M., AND DRUSCHEL, P. Soft timers: efficient microsecond software timer support for network processing. ACM Trans. Comput. Syst. (TOCS) 18, 3 (2000), 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] DRUSCHEL, P., AND BANGA, G. Lazy receiver processing (LRP): a network subsystem architecture for server systems. In Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI) (1996), pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] CHAKRABORTY, K., WELLS, P. M., AND SOHI, G. S. Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2006), pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23 Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] DeveloperWorks, Kernel Command using Linux System Calls IBM,2010.[http://www.ibm.com/developerworks/linux/library/l-system-calls/ HTML]&lt;br /&gt;
&lt;br /&gt;
[22] McCracken, Dave. &amp;lt;i&amp;gt;POSIX Threads and the Linux Kernel&amp;lt;/i&amp;gt;, IBM Linux Technology Center, Austin, TX. In Proceedings of the Ottawa Linux Symposium, 2002. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.8887&amp;amp;rep=rep1&amp;amp;type=pdf#page=330 PDF]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6501</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6501"/>
		<updated>2010-12-02T20:24:32Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now very thorough :) Since you pretty much turned all of your bulleted points from the discussion page into that on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thnx for the great job! I basically finished the part of related work based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, And great additions. &lt;br /&gt;
In terms of what needs to be done, I do believe that adding some detail to the critique is where we really need some focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration: take a look at the topics presented, see if additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can agree or disagree with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk and took some quick notes. There seem to be 3 good ones (of the 9) that I picked out. I&#039;ll summarize them and add them to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to listen to a specific question and maybe try to do some more &#039;critiquing&#039;, here is a list of when each question takes place, a very general statement of what it is about, and the very general answer:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;1 - 22:30 &amp;lt;br&amp;gt;Q: Did the paper consider Upstream patches(?) &amp;lt;br&amp;gt;A:No&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;2 - 23:00 &amp;lt;br&amp;gt;Q: Security issues with the pages &amp;lt;br&amp;gt;A:Pages pre-processor, no issue&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;3 - 24:10 &amp;lt;br&amp;gt;Q: What about blocking calls (read/write)? &amp;lt;br&amp;gt;A: Not handled&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;4 - 25:50 &amp;lt;br&amp;gt;Q: ? &amp;lt;br&amp;gt;A: Not a problem? (Personally didn&#039;t understand question, don&#039;t believe it&#039;s important, but anyone whose willing should double check)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;5 - 28:00 &amp;lt;br&amp;gt;Q: Compare pollution between user thread switching to user-kernel thread switching? &amp;lt;br&amp;gt;A: No, only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;6 - 29:30 &amp;lt;br&amp;gt;Q: Scheduling problems of what cores are &#039;system&#039; core, and what cores are &#039;user&#039; cores &amp;lt;br&amp;gt;A: Very simple algorithm, but not tested when running multiple apps, would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;7 - 31:00 &amp;lt;br&amp;gt;Q: Situations where FlexSC is bad, when running less or equal threads to the number of cores, such as &amp;quot;Scientific programs&amp;quot;, mostly in userspace where one thread has 100% CPU resource &amp;lt;br&amp;gt;A: Agrees, FlexSC is not to be used for such situations&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;8 - 33:00 &amp;lt;br&amp;gt;Q: Problems with un-related threads demanding service; how does it scale? An issue with the frequency of polling could cause sys calls to take time to perform &amp;lt;br&amp;gt;A: (Would be answered offline)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;9 - 34:30 &amp;lt;br&amp;gt;Q: Backwards compatibility and robustness &amp;lt;br&amp;gt;A: Only an issue with getTID (Thread ID), needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Wrote information in the Critique section for questions 3, 6 and 7 (Blocking Calls, Core Scheduling Issues, and When There Are Not More Threads Than Cores). If you feel any additions need to be made, please feel free to add them. Most importantly, I&#039;m not sure how to cite these: all the information was obtained from the mp3 of the presentation. Could someone let me know how to go about citing this?&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 21:05, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m going to run through the whole paper, and just make sure everything makes sense, and fill in the holes where needed. I&#039;ll also add my own thoughts along the way. Feel free to do the same.-Rarteaga&lt;br /&gt;
&lt;br /&gt;
Added 3 sections to the critique, definitions for the remaining terms (thanks Corey for taking care of some of these) and did some editing. My plan is to add some more flesh to the FlexSC-Threads section.&lt;br /&gt;
I&#039;ll do that sometime before 3PM on Thursday. I&#039;ll also go over the paper at that time in case something needs some editing. --[[User:Tafatah|Tafatah]] 06:38, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m going to be working on the contributions section under Implementation and demonstrating some statistics they showed in the paper. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted, historically established way to request services from the OS kernel.&lt;br /&gt;
   - System calls are almost always synchronous.&lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
         o TLB is the translation lookaside buffer, which caches page translations (for data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
         o Improve processor efficiency by enabling flexible scheduling of OS work, which in turn reduces the interleaving of kernel- and user-space execution and thus the pollution of processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
         o FlexSC-Threads converts legacy system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases to performance potential of software while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by disparity of accessing different processor resources (registers, caches, memory)&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call, like pwrite, is issued, the instructions-per-cycle level drops significantly, and it takes many cycles of execution (14,000 in the example) for the instructions-per-cycle rate to return to the level it was at before the (pwrite) call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that now a multi-core system can have individual cores designated to run either user-mode or kernel mode execution dynamically depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
         o This would allow the threading package to harvest independent system calls by switching threads in user mode whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - The traditional way to measure the performance cost of system calls is mode switch time: the time necessary to execute the system call instruction in user mode, resume execution in kernel mode, and then return execution back to user mode.&lt;br /&gt;
   - Mode switch in modern processors is a processor exception.&lt;br /&gt;
        o Flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
         o State pollution is the measure of how much user-mode data is overwritten in places like the TLB, caches (L1, L2, L3) and branch prediction tables by kernel-level execution of the system call. &lt;br /&gt;
        o This data must be re-populated upon the return to user-mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
         o Synchronous system calls do cause a drop in user-mode IPC due to: direct overhead (the processor exception associated with the system call, which flushes the processor pipeline) and indirect overhead (system call pollution of processor structures).&lt;br /&gt;
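The direct cost described above is easy to glimpse from user space. The following is a rough wall-clock micro-benchmark sketch, not the paper's methodology (the authors used hardware performance counters); `os.getppid` is chosen only because it is a convenient call that traps into the kernel on every invocation:

```python
import os
import time

def ns_per_call(fn, n=200_000):
    """Average wall-clock nanoseconds per call of fn (rough and noisy)."""
    t0 = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - t0) / n

plain = ns_per_call(lambda: None)  # interpreter loop + call overhead only
trap = ns_per_call(os.getppid)     # adds one synchronous kernel entry per call
```

The gap between `trap` and `plain` loosely approximates the direct mode-switch cost; the indirect cost (TLB/cache pollution) is exactly the part this kind of timing cannot see, which is why the paper measures IPC instead.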
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
         o A system call can be scheduled on a different core than the core on which it was invoked (possible only for exception-less system calls).&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
         o User-space threads find a free entry in a syscall page and place a request for a system call. The user-space thread can then continue executing without interruption, and must later return to the syscall page to get the return value from the system call.&lt;br /&gt;
         o Neither issuing the system call (via the syscall page) nor getting the return value generates an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
         o Each syscall entry has a state:&lt;br /&gt;
                  Free – means a syscall request can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts a special kernel thread, syscall thread, is used.&lt;br /&gt;
        o Sole purpose is to pull requests from syscall pages and execute them always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
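The syscall-page mechanism above can be sketched in miniature. Here is a toy Python model of the shared table and its Free/Submitted/Done states; the class and function names (`SyscallPage`, `syscall_thread`, etc.) are invented for this sketch and only mimic the paper's description, not the kernel implementation:

```python
from enum import Enum

class State(Enum):
    FREE = 0       # entry available for a new request
    SUBMITTED = 1  # user side has posted a request; kernel may execute it
    DONE = 2       # kernel has written the return value

class Entry:
    """One row of a syscall page: requested call, arguments, result, state."""
    def __init__(self):
        self.state = State.FREE
        self.syscall = None
        self.args = None
        self.ret = None

class SyscallPage:
    """Toy model of a shared syscall page: a fixed table of entries."""
    def __init__(self, n_entries=8):
        self.entries = [Entry() for _ in range(n_entries)]

    def submit(self, syscall, args):
        """User side: claim a FREE entry and post a request (no exception raised)."""
        for i, e in enumerate(self.entries):
            if e.state is State.FREE:
                e.syscall, e.args = syscall, args
                e.state = State.SUBMITTED
                return i
        raise RuntimeError("syscall page full")

    def collect(self, idx):
        """User side: fetch the return value once the kernel marks the entry DONE."""
        e = self.entries[idx]
        if e.state is not State.DONE:
            return None  # still pending; the caller keeps doing other work
        ret = e.ret
        e.state = State.FREE  # recycle the entry
        return ret

def syscall_thread(page, table):
    """Kernel side: drain all SUBMITTED entries in one batch (temporal locality)."""
    executed = 0
    for e in page.entries:
        if e.state is State.SUBMITTED:
            e.ret = table[e.syscall](*e.args)
            e.state = State.DONE
            executed += 1
    return executed
```

Note that both `submit` and `collect` touch only shared memory; the only kernel involvement is the syscall thread's batched drain, which is exactly what removes the per-call exception.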
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
         o The researchers describe working with exception-less system calls as being similar to event-driven programming, in that you do not get the same sequential execution of code as you do with synchronous system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - Intended use is with server-type applications which have many user-mode threads (like Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel level thread via interactions with the syscall pages.&lt;br /&gt;
         o The syscall pages are private to each kernel-level thread; this means each core of the system has a syscall page from which it will receive system calls.&lt;br /&gt;
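The M-on-N idea — switch to another user thread instead of trapping, then drain all harvested calls at once — can be illustrated with a toy cooperative scheduler. Generators stand in for user-mode threads, and `flexsc_run`, `worker`, and the request tuples are all invented for this sketch:

```python
def flexsc_run(user_threads, table):
    """Toy M-on-1 scheduler: run each user 'thread' (a generator) until it
    requests a syscall, queue the request, then execute the whole queue as
    one batch (one counted 'kernel entry' for many calls), resume the
    threads with their return values, and repeat until all finish."""
    ready = [(t, None) for t in user_threads]  # (thread, value to resume with)
    finished, kernel_entries = [], 0
    while ready:
        pending = []  # (thread, (syscall name, args)) harvested this round
        for t, value in ready:
            try:
                name, args = t.send(value)       # run until next syscall request
                pending.append((t, (name, args)))
            except StopIteration as done:        # user thread ran to completion
                finished.append(done.value)
        if pending:
            kernel_entries += 1                  # one batched entry per round
        ready = [(t, table[name](*args)) for t, (name, args) in pending]
    return finished, kernel_entries

def worker(i):
    # a user-mode thread that makes one (hypothetical) "double" syscall
    doubled = yield ("double", (i,))
    return doubled
```

Running three workers this way performs three syscalls with a single batched kernel entry, where a synchronous design would pay three separate mode switches.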
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls compared to a batch of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same is true for a remote server situation: one synchronous call is much faster than one exception-less system call, but a batch of exception-less system calls is faster than the same number of synchronous system calls.&lt;br /&gt;
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Operating systems have a concept called multi-calls which involves collecting multiple system calls and submitting them as a single system call.&lt;br /&gt;
        o The Cassyopia compiler has an additional process called a looped multi-call where the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially, each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
         o Other techniques include Soft Timers and Lazy Receiver Processing, which try to tackle the issue of locality of execution in device interrupt handling. Both try to limit processor interference associated with interrupt handling without affecting the latency of servicing requests.&lt;br /&gt;
         o Computation Spreading is another locality technique which is similar to FlexSC.&lt;br /&gt;
                 Processor modifications that allow hardware migration of threads and migration to specialized cores.&lt;br /&gt;
                 Did not model TLBs and on current hardware synchronous thread migration is a costly interprocessor interrupt.&lt;br /&gt;
        o Also have proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                  Also, FlexSC can dynamically adapt the proportion of cores used by the kernel versus cores shared by user and kernel execution.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o Main difference between past research and FlexSC is that none of the past proposals have decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6500</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6500"/>
		<updated>2010-12-02T20:24:09Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login, as we found out in class, but you can listen to the speech and follow along with the slides pretty easily; I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now very thoroughly :) Since you pretty much turned all of your bulleted points from the discussion page into that on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thnx for the great job! I basically finished the part of related work based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, and great additions. &lt;br /&gt;
In terms of what needs to be done, I do believe that adding some detail to the critique is where we really need some focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration; maybe just take a look at the topics presented, see if additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can be agreeing or disagreeing with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk and took some quick notes. There seem to be 3 good ones (of the 9) that I picked out. I&#039;ll summarize them and add to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to have a listen to a specific question, and maybe try to do some more &#039;critiquing&#039;, here is a list of when each question takes place, with a very general statement of what the question is about and the very general answer:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;1 - 22:30 &amp;lt;br&amp;gt;Q: Did the paper consider Upstream patches(?) &amp;lt;br&amp;gt;A:No&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;2 - 23:00 &amp;lt;br&amp;gt;Q: Security issues with the pages &amp;lt;br&amp;gt;A:Pages pre-processor, no issue&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;3 - 24:10 &amp;lt;br&amp;gt;Q: What about blocking calls (read/write)? &amp;lt;br&amp;gt;A: Not handled&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;4 - 25:50 &amp;lt;br&amp;gt;Q: ? &amp;lt;br&amp;gt;A: Not a problem? (Personally didn&#039;t understand the question, don&#039;t believe it&#039;s important, but anyone who&#039;s willing should double check)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;5 - 28:00 &amp;lt;br&amp;gt;Q: Compare pollution between user thread switching to user-kernel thread switching? &amp;lt;br&amp;gt;A: No, only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;6 - 29:30 &amp;lt;br&amp;gt;Q: Scheduling problems of what cores are &#039;system&#039; core, and what cores are &#039;user&#039; cores &amp;lt;br&amp;gt;A: Very simple algorithm, but not tested when running multiple apps, would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;7 - 31:00 &amp;lt;br&amp;gt;Q: Situations where FlexSC is bad, e.g. when running no more threads than cores, such as &amp;quot;scientific programs&amp;quot; that run mostly in userspace, where one thread gets 100% of a CPU &amp;lt;br&amp;gt;A: Agrees, FlexSC is not meant for such situations&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;8 - 33:00 &amp;lt;br&amp;gt;Q: Problems with unrelated threads demanding service; how does it scale? The polling frequency could cause sys calls to take time to perform &amp;lt;br&amp;gt;A: (Would be answered offline)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;9 - 34:30 &amp;lt;br&amp;gt;Q: Backwards compatibility and robustness &amp;lt;br&amp;gt;A: Only an issue with getTID (Thread ID); needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Wrote information in Critique for questions 3, 6 and 7 (Blocking Calls, Core Scheduling Issues, and When There Are Not More Threads Than Cores). If you feel any additions need to be made, please feel free to add them. Most importantly, I&#039;m not sure how to cite these. All information was obtained from the mp3 of the presentation; could someone let me know how to go about citing this?&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 21:05, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m going to run through the whole paper, and just make sure everything makes sense, and fill in the holes where needed. I&#039;ll also add my own thoughts along the way. Feel free to do the same.-Rarteaga&lt;br /&gt;
&lt;br /&gt;
Added 3 sections to the critique, definitions for the remaining terms (thanks Corey for taking care of some of these) and did some editing. My plan is to add some more flesh to the FlexSC-Threads section.&lt;br /&gt;
I&#039;ll do that sometime before 3PM on Thursday. I&#039;ll also go over the paper at that time in case something needs some editing. --[[User:Tafatah|Tafatah]] 06:38, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m going to be working on the contributions section under Implementation and demonstrating some statistics they showed in the paper. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted, historically established way to request services from the OS kernel.&lt;br /&gt;
   - System calls are almost always synchronous.&lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
         o TLB is the translation lookaside buffer, which caches page translations (for data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
         o Improve processor efficiency by enabling flexible scheduling of OS work, which in turn reduces the interleaving of kernel- and user-space execution and thus the pollution of processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
         o FlexSC-Threads converts legacy system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases to performance potential of software while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by disparity of accessing different processor resources (registers, caches, memory)&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call, like pwrite, is issued, the instructions-per-cycle level drops significantly, and it takes many cycles of execution (14,000 in the example) for the instructions-per-cycle rate to return to the level it was at before the (pwrite) call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that now a multi-core system can have individual cores designated to run either user-mode or kernel mode execution dynamically depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
         o This would allow the threading package to harvest independent system calls by switching threads in user mode whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - The traditional way to measure the performance cost of system calls is mode switch time: the time necessary to execute the system call instruction in user mode, resume execution in kernel mode, and then return execution back to user mode.&lt;br /&gt;
   - Mode switch in modern processors is a processor exception.&lt;br /&gt;
        o Flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
         o State pollution is the measure of how much user-mode data is overwritten in places like the TLB, caches (L1, L2, L3) and branch prediction tables by kernel-level execution of the system call. &lt;br /&gt;
        o This data must be re-populated upon the return to user-mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
         o Synchronous system calls do cause a drop in user-mode IPC due to: direct overhead (the processor exception associated with the system call, which flushes the processor pipeline) and indirect overhead (system call pollution of processor structures).&lt;br /&gt;
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
         o A system call can be scheduled on a different core than the core on which it was invoked (possible only for exception-less system calls).&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
         o User-space threads find a free entry in a syscall page and place a request for a system call. The user-space thread can then continue executing without interruption, and must later return to the syscall page to get the return value from the system call.&lt;br /&gt;
         o Neither issuing the system call (via the syscall page) nor getting the return value generates an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
         o Each syscall entry has a state:&lt;br /&gt;
                  Free – means a syscall request can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts a special kernel thread, syscall thread, is used.&lt;br /&gt;
        o Sole purpose is to pull requests from syscall pages and execute them always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
         o The researchers describe working with exception-less system calls as being similar to event-driven programming, in that you do not get the same sequential execution of code as you do with synchronous system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - Intended use is with server-type applications which have many user-mode threads (like Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel level thread via interactions with the syscall pages.&lt;br /&gt;
         o The syscall pages are private to each kernel-level thread; this means each core of the system has a syscall page from which it will receive system calls.&lt;br /&gt;
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls compared to a batch of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same is true for a remote server situation: one synchronous call is much faster than one exception-less system call, but a batch of exception-less system calls is faster than the same number of synchronous system calls.&lt;br /&gt;
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Operating systems have a concept called multi-calls which involves collecting multiple system calls and submitting them as a single system call.&lt;br /&gt;
        o The Cassyopia compiler has an additional process called a looped multi-call where the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially, each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
        o Other techniques include Soft Timers and Lazy Receiver Processing, which tackle the issue of locality of execution in their handling of device interrupts. Both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests.&lt;br /&gt;
        o Computation Spreading is another locality process which is similar to FlexSC.&lt;br /&gt;
                 Processor modifications that allow hardware migration of threads and migration to specialized cores.&lt;br /&gt;
                  Their evaluation did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt.&lt;br /&gt;
        o Also have proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                  Also, FlexSC can dynamically adapt the proportion of cores used exclusively by the kernel versus cores shared by user and kernel execution.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o Main difference between past research and FlexSC is that none of the past proposals have decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6200</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6200"/>
		<updated>2010-12-02T05:26:07Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Syscall Page */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf], for further details. To fully understand the ideas being discussed, it is essential to be comfortable with the paper's terminology. The most important notions in the FlexSC paper, the ones at the core of it all, are system calls[21] and synchronous execution. These base definitions, along with numerous other helpful concepts, are explained in the section of the paper to follow. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to comprehend the paper. It is more important for the reader to grasp the core ideas of these definitions, along with the underlying motivation for their existence, than to understand the minute details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
&amp;lt;b&amp;gt;Mode Switches&amp;lt;/b&amp;gt; refer to transitions between execution modes: specifically, moving from user mode to kernel mode, or from kernel mode back to user mode. The direction does not matter; it is a general term. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution back to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;Synchronous Execution Model (System Call Interface)&amp;lt;/b&amp;gt; refers to a structure in which system calls are managed in a serialized manner: the system completes one system call at a time and does not move on to the next until the previous one has finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, most operating system calls are synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to wasteful or unnecessary delay caused by system calls. This pollution is a direct consequence of the fact that a system call invokes a mode switch, which is not a costless operation. The &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main-memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are events which cause the processor to stop its current execution unexpectedly in order to handle an issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together so they can be executed as a group, instead of each being executed immediately when it is called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
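The idea can be illustrated with a small sketch (illustrative Python, not the FlexSC implementation): requests are collected until a batch is full, and the whole batch then pays for one simulated user/kernel transition instead of one per call.&lt;br /&gt;

```python
# Toy batching layer: class and field names here are invented for
# illustration; a real system would save actual mode switches.
class BatchingLayer:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.pending = []          # requests collected so far
        self.mode_switches = 0     # simulated kernel entries

    def request(self, name, args):
        self.pending.append((name, args))
        if len(self.pending) == self.batch_size:
            return self.flush()
        return None                # still collecting

    def flush(self):
        # One simulated user-to-kernel transition for the whole batch.
        self.mode_switches += 1
        results = [f"{name}{args} done" for name, args in self.pending]
        self.pending = []
        return results

layer = BatchingLayer(batch_size=4)
for i in range(8):
    layer.request("write", (i,))
print(layer.mode_switches)  # 2 transitions for 8 calls instead of 8
```

In a real system the saving is the avoided mode switches and the processor-state pollution that accompanies each one; the counter here only tallies simulated transitions.&lt;br /&gt;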
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality; &amp;lt;b&amp;gt; spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity will be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor executes in a single clock cycle, on average.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
will add the following terms.&amp;lt;br&amp;gt;&lt;br /&gt;
TODO-Start --[[User:Tafatah|Tafatah]] 16:13, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I did some of these, some I don&#039;t think I can adequately explain, or just have no idea what they are, so I left them. --[[User:CFaibish|CFaibish]] 00:31, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed; when the process resumes, its translations must be rebuilt from scratch. Too many context switches therefore increase TLB misses and degrade performance.[17]&lt;br /&gt;
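The cost described above can be illustrated with a toy model (illustrative Python, not real hardware): a small cache of virtual-to-physical page translations that is emptied on a context switch, forcing page-table walks to repopulate it.&lt;br /&gt;

```python
# Tiny illustrative TLB model: a cache of virtual page to physical
# page translations, flushed when the OS switches context.
class TinyTLB:
    def __init__(self):
        self.map = {}      # virtual page number to physical page number
        self.hits = 0
        self.misses = 0

    def translate(self, vpage, page_table):
        if vpage in self.map:
            self.hits += 1
        else:
            self.misses += 1              # must walk the page table
            self.map[vpage] = page_table[vpage]
        return self.map[vpage]

    def flush(self):
        self.map = {}                     # the cost of a context switch

page_table = {0: 7, 1: 3, 2: 9}
tlb = TinyTLB()
for v in [0, 1, 0, 1]:
    tlb.translate(v, page_table)          # 2 misses, then 2 hits
tlb.flush()                               # context switch
tlb.translate(0, page_table)              # miss again: re-walk needed
print(tlb.hits, tlb.misses)  # 2 3
```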
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As per the paper, locality refers to both types defined above, temporal and spatial. A lack of locality means the data and instructions the application needs most frequently are repeatedly evicted (from registers and caches) by system call handling, which contributes to performance degradation.&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better. [2, p. 151]&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction is a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The Linux ABI is a kernel patch that allows Linux to run SCO, Xenix, Solaris x86, and other Unix binaries by emulating their application binary interfaces, i.e. the binary-level conventions through which compiled programs interact with the kernel.[18]&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is the threading library that allows the Linux kernel to run applications using POSIX Threads efficiently; it implements a 1:1 model, mapping each user-level thread to a kernel thread.[19]&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a memory page shared between user space and kernel space through which exception-less system calls are made. User threads write system call requests into free entries of the page using regular store instructions, and later read the return values back from the same entries; neither step raises a processor exception.[1]&lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are kernel-mode threads whose sole purpose is to pull submitted requests from syscall pages, execute the corresponding system calls, and post the results back to the pages; they are the mechanism that decouples system call execution from invocation.[1]&lt;br /&gt;
&lt;br /&gt;
===Inter-Processor Interrupt ===&lt;br /&gt;
An inter-processor interrupt (IPI) is an interrupt that one processor core sends to another, for example to trigger rescheduling or thread migration; on current hardware it is a relatively costly operation.&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer Problem ===&lt;br /&gt;
The producer-consumer problem is a classic synchronization problem in which one set of threads produces items into a shared buffer while another set consumes them, with the two sides coordinating so the buffer neither overflows nor is read while empty. The relationship appears directly in FlexSC: user-mode threads act as producers, writing system call requests into syscall page entries, while kernel-mode syscall threads act as consumers, executing those requests and writing back the results.&lt;br /&gt;
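A minimal producer-consumer sketch of this relationship (illustrative Python; FlexSC itself operates on shared syscall pages, not a Python queue): user threads produce requests, a kernel-side consumer completes them.&lt;br /&gt;

```python
# Producer-consumer sketch: the producer stands in for a user thread
# posting system call requests, the consumer for a syscall thread.
import queue
import threading

requests = queue.Queue()
results = []

def producer():
    for n in range(5):
        requests.put(("getpid", n))   # user thread posts a request
    requests.put(None)                # sentinel: no more work

def consumer():
    while True:
        item = requests.get()
        if item is None:
            break
        name, n = item
        results.append(f"{name} request {n} completed")

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))  # 5
```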
&lt;br /&gt;
TODO End --[[User:Tafatah|Tafatah]] 16:13, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls is the easy-to-program nature of sequential operation. However, this ease of use comes with undesirable side effects which can reduce the instructions per cycle (IPC) achieved by the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while remaining easy for application programmers to adopt.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls do not make use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel versus the cores shared between the kernel and the user as FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking), and hybrid solutions; FlexSC differs by providing a mechanism that separates system call execution from system call invocation, a key difference from previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode-switch time of multiple system calls each invoked independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is invoked, FlexSC groups system calls into batches. Each batch can then be executed at one time, minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and the indirect cost, pollution of critical processor structures, associated with switching modes. System call batching works by first collecting as many system call requests as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can designate a single core to run all system calls. This is possible because, for an exception-less system call, system call execution is decoupled from system call invocation, as described further in the &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;: a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to request kernel-mode services (system calls). A user-mode thread writes a system call request into a free entry of a syscall page; the system call is executed once the batch condition is met, and its return value is stored back on the syscall page, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor reading the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and each entry is in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate system call invocation from execution, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of a syscall thread is to pull requests from syscall pages and execute them, always in kernel mode. This is the mechanism that allows a user-mode thread to issue a request and continue to run while the kernel-level system call is being executed. In addition, since invocation is separate from execution, a process running on one core may request a system call whose execution is completed on an entirely different core. This gives exception-less system calls the unique capability of delegating all system call execution to specific cores while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
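Items 3 and 4 above can be sketched together (illustrative Python; the class and field names are assumptions, not FlexSC&#039;s actual data structures): a page of entries cycling through Free, Submitted, and Done, drained by a kernel-side pass standing in for a syscall thread.&lt;br /&gt;

```python
# Sketch of a syscall page as the paper describes it: a table of
# entries, each in state FREE, SUBMITTED, or DONE.
FREE, SUBMITTED, DONE = "free", "submitted", "done"

class SyscallEntry:
    def __init__(self):
        self.state = FREE
        self.syscall = None
        self.args = None
        self.result = None

class SyscallPage:
    def __init__(self, n_entries=8):
        self.entries = [SyscallEntry() for _ in range(n_entries)]

    def submit(self, syscall, args):
        """User side: claim a free entry with ordinary stores (no trap)."""
        for i, e in enumerate(self.entries):
            if e.state == FREE:
                e.syscall, e.args, e.state = syscall, args, SUBMITTED
                return i
        return None  # page full; caller must wait

    def kernel_pass(self):
        """Kernel side: a syscall thread drains submitted entries."""
        for e in self.entries:
            if e.state == SUBMITTED:
                e.result = f"{e.syscall}{e.args} ok"  # pretend execution
                e.state = DONE

    def reap(self, i):
        """User side: collect the result and free the entry."""
        e = self.entries[i]
        if e.state == DONE:
            r, e.state, e.result = e.result, FREE, None
            return r
        return None  # still pending

page = SyscallPage()
slot = page.submit("read", (0, 64))
page.kernel_pass()
r = page.reap(slot)
print(r)  # read(0, 64) ok
```

In FlexSC the kernel-side pass would run on a syscall thread, possibly on a different core from the submitting user thread, which is what enables the core specialization described in item 2.&lt;br /&gt;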
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means multi-threaded Linux applications run on FlexSC threads with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. To accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system; multiple user-mode threads are multiplexed onto a single syscall page, which in turn has a single kernel-level thread to execute the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming against an exception-less system call interface compared to the relatively simple synchronous interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law, which states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. The paper claims this gap is mainly caused by the disparity in cost of accessing different processor resources such as registers, cache, and memory.[1] In this light, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to keep building more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls which can be batched before execution; in that situation FlexSC far outperformed traditional synchronous system calls.[1] This is why the paper focuses on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution, synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
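The hybrid suggestion can be made concrete with a back-of-the-envelope model (all costs here are made-up illustrative numbers, not measurements from the paper): below a threshold the per-call synchronous cost wins; above it, the fixed batch setup cost is amortized.&lt;br /&gt;

```python
# Invented cost units for illustration only.
SYNC_COST = 3          # pretend cost per synchronous call
BATCH_SETUP = 10       # pretend fixed cost to set up a batch
PER_CALL = 1           # pretend marginal cost per batched call

def best_strategy(n_calls):
    """Pick whichever total pretend cost is smaller."""
    sync_total = SYNC_COST * n_calls
    batch_total = BATCH_SETUP + PER_CALL * n_calls
    if sync_total == min(sync_total, batch_total):
        return "synchronous"
    return "exception-less batch"

print(best_strategy(2))    # synchronous
print(best_strategy(100))  # exception-less batch
```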
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism: it can &#039;harvest&#039; enough independent work that it does not need to track dependencies between system calls. In other situations this could be a problem. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if one needs to block, FlexSC will move on and execute the next system call. This can cause problems for system calls such as a read followed by a write, where the write call has an outstanding dependency on the read call. This could be resolved by some kind of combined system call, that is, multiple system calls executed as one single call; unfortunately, FlexSC does not currently handle such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of the cores to system calls. Currently, FlexSC first wakes up core X to run a system call thread; when another batch comes in, if core X is still busy, it tries core X-1, and so on. Of all the algorithms the researchers tested, this, the simplest, turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; the scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, such as scientific programs, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as observed by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers and who considered FlexSC for that purpose. He found that FlexSC&#039;s reduction of mode switches, via memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it did not remove the requirement of data copying and did not reduce the overheads associated with interrupts in IO-intensive tasks.&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (three lines of code) is required, as per section 3.2 of the paper [1]. That means adopters would have to add or modify the referenced lines and recompile the kernel, and repeat this after each kernel update.&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done; the paper says decisions are made based on workload requirements, which does not exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for system call thread assignments, but it is unclear when that list is created: at installation time, generated at startup, or manually by the installer. On a related note, scalability with increased core counts is ambiguous. One gets the impression the scheduler is very scalable, since each core spawns a system call thread and thus as many threads as there are cores could run concurrently, for one or more processes [1], but more explicit results would have been beneficial. The paper also mentions that hyper-threading was turned off to ease analysis of the results. That is understandable, but it would be nice to know whether those hardware threads (two per core) would be treated as cores when turned on: would the scheduler then see eight cores, and would the predefined static core list need to be modified to list eight instead of four?&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same lines, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize about the possible performance when using specialized GPUs, such as NVIDIA Tesla GPUs. Would the FlexSC scheduler be able to take advantage of the additional cores, and hence use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept which involves collecting multiple system calls and submitting them as a single system call; they are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address blocking the way exception-less system calls do. Multi-call system calls are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and when one blocks, the next can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
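A looped multi-call can be sketched as follows (illustrative Python with invented stand-ins for real system calls): the calls in one batch run sequentially, and each result feeds the next call as an argument.&lt;br /&gt;

```python
# Toy "looped multi-call" in the Cassyopia style. The fake_open and
# fake_read helpers are invented for illustration; they are not real
# system calls.
def multi_call(steps, initial):
    """Run the steps in order; each step receives the previous result."""
    value = initial
    for step in steps:
        value = step(value)
    return value

fake_fd_table = {}

def fake_open(path):
    fd = len(fake_fd_table) + 3   # pretend descriptor allocation
    fake_fd_table[fd] = path
    return fd

def fake_read(fd):
    return f"data from {fake_fd_table[fd]}"

# open then read, submitted as one unit: the read consumes the
# descriptor returned by the open within the same multi-call.
result = multi_call([fake_open, fake_read], "/etc/hostname")
print(result)  # data from /etc/hostname
```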
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques include Soft Timers[13] and Lazy Receiver Processing[14], which tackle locality of execution in their handling of device interrupts; both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is most similar to the multicore execution of FlexSC: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, its evaluation did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Proposals that dedicate CPU cores to specific operating system functionality differ from FlexSC in two ways: they require a microkernel, and, unlike them, FlexSC can dynamically adapt the proportion of cores used exclusively by the kernel versus cores shared by user and kernel execution. While all these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC could provide a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threading, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between these proposals and FlexSC is that none of the non-blocking approaches decouple system call invocation from its execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] Barham, Paul &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Xen and the Art of Virtualization&amp;lt;/i&amp;gt;, Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), 2003, pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] Larus, James and Michael Parkes, &amp;lt;i&amp;gt;Using Cohort-Scheduling to Enhance Server Performance&amp;lt;/i&amp;gt;, Proceedings of the USENIX Annual Technical Conference (ATEC), 2002, pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] Aron, Mohit and Peter Druschel, &amp;lt;i&amp;gt;Soft Timers: Efficient Microsecond Software Timer Support for Network Processing&amp;lt;/i&amp;gt;, ACM Transactions on Computer Systems (TOCS) 18, 3, 2000, pp. 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] Druschel, Peter and Gaurav Banga, &amp;lt;i&amp;gt;Lazy Receiver Processing (LRP): A Network Subsystem Architecture for Server Systems&amp;lt;/i&amp;gt;, Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI), 1996, pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] Chakraborty, Koushik, Philip M. Wells and Gurindar S. Sohi, &amp;lt;i&amp;gt;Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly&amp;lt;/i&amp;gt;, Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006, pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Teller, Patricia J., &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23, Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] Drepper, Ulrich and Ingo Molnar, &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;, Tech. rep., Red Hat Inc., 2003. [http://people.redhat.com/drepper/nptl-design.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[20] Blake, M. Brian, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;, Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] IBM developerWorks, &amp;lt;i&amp;gt;Kernel Command Using Linux System Calls&amp;lt;/i&amp;gt;, 2010.[http://www.ibm.com/developerworks/linux/library/l-system-calls/ HTML]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6197</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6197"/>
		<updated>2010-12-02T05:21:43Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for the full details. To fully understand the ideas being discussed, it is essential to comprehend the basic vocabulary used in the paper. The most important notions in the FlexSC paper, at the core of it all, are system calls[21] and synchronous execution. These base definitions, along with numerous other helpful ideas, are explained in the sections that follow. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. It is more important for the reader to understand the core ideas of these definitions, along with the underlying motivation for their existence, than to understand the minute details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between user space and kernel space. User space is not given direct access to the kernel&#039;s services, for several reasons (security being one), so system calls act as the messengers between user and kernel space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
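As a minimal sketch of this gateway, a program can invoke a system call directly and compare the result to the usual library wrapper. This sketch assumes an x86-64 Linux machine, where system call number 39 is SYS_getpid; the number differs on other architectures.

```python
# Sketch: invoking a system call directly, next to its libc wrapper.
# Assumption: x86-64 Linux, where syscall number 39 is SYS_getpid.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)  # handle to the already-loaded C library

pid_via_wrapper = os.getpid()        # convenience wrapper around the system call
pid_via_syscall = libc.syscall(39)   # raw trap into the kernel (SYS_getpid)

# Both paths reach the same kernel service routine, so the PIDs match.
print(pid_via_wrapper, pid_via_syscall)
```

On the stated platform both calls return the same process ID, since the wrapper is only a thin layer over the same kernel entry point.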
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
A &amp;lt;b&amp;gt;Mode Switch&amp;lt;/b&amp;gt; is a transition from one processor mode to another, specifically from user mode to kernel mode or from kernel mode back to user mode; the term is general and does not depend on the direction of the switch. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to issue a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution back to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;Synchronous Execution Model (System Call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time and does not move on to the next system call until the previous one has finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
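The blocking/non-blocking distinction can be sketched with non-blocking I/O on a pipe. This is ordinary non-blocking I/O, not FlexSC's exception-less mechanism, but it illustrates the key idea: control returns to the caller immediately instead of blocking.

```python
# Sketch: a read that would block returns control immediately instead.
import os

r, w = os.pipe()            # empty pipe: a blocking read here would stall
os.set_blocking(r, False)   # switch the read end to non-blocking mode

try:
    os.read(r, 100)         # nothing to read yet
    outcome = "data"
except BlockingIOError:
    outcome = "would-block" # caller regains control at once and can do other work

os.write(w, b"hi")
data = os.read(r, 100)      # data is now available, so the read succeeds
print(outcome, data)
```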
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to the wasteful or unnecessary delay in the system caused by system calls. This pollution is in direct correlation with the fact that a system call invokes a mode switch, which is not a costless task. The &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop its current execution unexpectedly in order to handle the issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity tend to be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the amount of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
will add the following terms.&amp;lt;br&amp;gt;&lt;br /&gt;
TODO-Start --[[User:Tafatah|Tafatah]] 16:13, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I did some of these, some I don&#039;t think I can adequately explain, or just have no idea what they are, so I left them. --[[User:CFaibish|CFaibish]] 00:31, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As per the paper, locality refers to both types of locality defined above, i.e. temporal and spatial. Lack of locality here means that the data and instructions needed most frequently by the application keep being evicted (from registers and caches) due to system calls, which contributes to performance degradation.&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour. The higher n is, the better. [2, p. 151]&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A regular store instruction refers to a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a page of memory shared between user space and the kernel, on which user-mode threads post system call requests and from which they later collect return values; syscall pages are the core of the exception-less system call interface described in the Contribution section.[1] &lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are kernel-mode threads whose only job is to pull submitted requests from syscall pages and execute them; they are the mechanism that decouples system call execution from system call invocation.[1]&lt;br /&gt;
&lt;br /&gt;
===Inter-Processor Interrupt ===&lt;br /&gt;
An inter-processor interrupt (IPI) is an interrupt that one processor core sends to another, typically to request that the other core perform some work; IPIs are relatively expensive operations.[1]&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer Problem ===&lt;br /&gt;
The producer-consumer problem is a classic synchronization pattern in which one set of threads produces work items into a shared buffer while another set consumes them.[2] In FlexSC, user-mode threads act as producers, posting system call requests on syscall pages, while syscall threads act as consumers that execute those requests.[1]&lt;br /&gt;
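The producer-consumer relationship in FlexSC can be sketched with ordinary threads and a shared queue: "user threads" produce requests and one "syscall thread" consumes them. All names here are illustrative, not the paper's API.

```python
# Sketch: user threads produce syscall requests; one worker consumes them.
import queue
import threading

requests = queue.Queue()   # stands in for the shared syscall pages
results = {}

def user_thread(tid):
    # producer: posts a request instead of trapping into the kernel
    requests.put((tid, "getpid"))

def syscall_thread():
    # consumer: drains the submitted requests and records return values
    for _ in range(3):
        tid, name = requests.get()
        results[tid] = f"executed {name} for thread {tid}"

producers = [threading.Thread(target=user_thread, args=(i,)) for i in range(3)]
consumer = threading.Thread(target=syscall_thread)
for t in producers:
    t.start()
consumer.start()
for t in producers:
    t.join()
consumer.join()
print(results)
```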
&lt;br /&gt;
TODO End --[[User:Tafatah|Tafatah]] 16:13, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of using synchronous system calls comes from the easy-to-program nature of sequential operation. However, this ease of use also comes with undesirable side effects which can lower the instructions per cycle (IPC) of the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy to implement for application programmers.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that although they are easy to use, they are not optimal. Previous research includes work into &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution with multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls do not make use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC describes methods to handle both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution and multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between the kernel and the user like FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking) and hybrid solutions. However, FlexSC provides a mechanism to separate system call execution from system call invocation; this is a key difference between FlexSC and previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode switch time of multiple system calls each called independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is called, FlexSC groups system calls into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and the indirect cost, the pollution of critical processor structures associated with switching modes. System call batching works by first requesting as many system calls as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;. Syscall pages are a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to make a request (system call) for kernel-mode procedures. A user-mode thread may write a system call request into a free entry of a syscall page; the kernel will then execute the request once the batch condition is met and store the return value on the syscall page. The user-mode thread can then return to the syscall page to obtain the return value. Neither issuing the system call via the syscall page nor getting the return value from the syscall page generates a processor exception. Each syscall page is a table of syscall entries. These entries may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt; - meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt; - meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt; - meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute the request, always in kernel mode. This is the mechanic that allows exception-less system calls to provide the ability for a user-mode thread to issue a request and continue to run while the kernel level system call is being executed. In addition, since the system call invocation is separate from execution, a process running on one core may request a system call yet the execution of the system call may be completed on an entirely different core. This allows exception-less system calls the unique capability of having all system call execution delegated to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
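The mechanisms above can be sketched as a toy model: a table of entries in Free/Submitted/Done states, filled by regular stores on the user side and drained in one batch by a stand-in syscall thread. The entry layout, state names, and the "add" call are illustrative assumptions, not the paper's actual kernel interface.

```python
# Toy model of a syscall page: a table of entries, each in one of three
# states, drained by a worker playing the role of a syscall thread.
import threading

FREE, SUBMITTED, DONE = "free", "submitted", "done"

class Entry:
    def __init__(self):
        self.state = FREE
        self.call = None      # (name, args) of the requested call
        self.result = None    # return value filled in by the worker

page = [Entry() for _ in range(8)]   # one "syscall page" of 8 entries

def submit(name, *args):
    # user side: claim a free entry with plain stores; no exception is raised
    for e in page:
        if e.state == FREE:
            e.call = (name, args)
            e.state = SUBMITTED
            return e
    raise RuntimeError("page full")

def syscall_thread():
    # kernel side: execute every submitted entry in one batch
    table = {"add": lambda a, b: a + b}   # stand-in for real syscalls
    for e in page:
        if e.state == SUBMITTED:
            name, args = e.call
            e.result = table[name](*args)
            e.state = DONE

e1 = submit("add", 2, 3)
e2 = submit("add", 10, 20)
worker = threading.Thread(target=syscall_thread)  # the batch executes here
worker.start()
worker.join()
print(e1.state, e1.result, e2.result)
```

The caller keeps running between `submit` and the final poll of `e.state`, which is the decoupling of invocation from execution that the section describes.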
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC Threads are immediately capable of running multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming for an exception-less system call interface as compared to the relatively simple synchronous system call interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law. Moore&#039;s Law states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. This paper claims that the gap is mainly caused by the disparity in the cost of accessing different processor resources such as registers, cache and memory.[1] In this manner, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is actually an attempt to change the way we view software. It is not enough to continue to build more powerful machines if the code we currently run will not speed up (become more efficient) along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls which can in turn be batched before execution; in this situation the FlexSC system far outperformed traditional synchronous system calls.[1] This is why the research paper&#039;s focus is on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution of synchronous calls below some threshold and exception-less system calls above that threshold would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work so that it doesn&#039;t need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if they need to block, FlexSC would jump to the next system call and execute that one. This can cause a problem for system calls such as reading and writing, where the write call has an outstanding dependency on the read call. However, this could be resolved by using some kind of combined system call, that is, multiple system calls executed as one single call. Unfortunately, FlexSC does not have any current handling for such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of the cores to system calls. Currently, FlexSC first wakes up core X to run a system call thread; when another batch comes in, if core X is still busy, it will try core X-1, and so on. Of all the algorithms they tested, this simplest one turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
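The heuristic described above (try core X first, then X-1, and so on) amounts to a downward scan for an idle core. The function name and the busy-set representation below are illustrative.

```python
# Sketch of the downward-scan core selection described above.
def pick_syscall_core(num_cores, busy):
    """Return the highest-numbered idle core, or None if all are busy."""
    for core in range(num_cores - 1, -1, -1):  # core X, then X-1, ...
        if core not in busy:
            return core
    return None

print(pick_syscall_core(4, set()))     # core 3 ("core X") is free
print(pick_syscall_core(4, {3}))       # core 3 busy, fall back to core 2
print(pick_syscall_core(2, {0, 1}))    # every core busy
```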
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where there is a single thread using 100% of a CPU and acting primarily in user space, such as &#039;Scientific Programs&#039;, FlexSC causes more overhead than performance gains. As a result, FlexSC is not an optimal implementation for cases such as this.&lt;br /&gt;
&lt;br /&gt;
===IO ===&lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as realized by Vijay Vasudevan [16]. Vijay&#039;s research aims to reduce the energy footprint of data centers, and FlexSC was considered for this purpose. It was found that FlexSC&#039;s reduction of mode switches, via the use of memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it did not remove the requirement of data copying and did not reduce the overheads associated with interrupts in IO-intensive tasks.&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) is still required, as per section 3.2 of the paper [1]. That means adopters would have to add/modify the referenced lines and then recompile the kernel, and repeat this after each kernel update.&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done: it is mentioned that decisions are made based on the workload requirements, which doesn&#039;t exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for system call thread assignments, and it is unclear when that list is created. Is it created at installation time, is it generated initially, or does the installer have to do manual work? On a related note, scalability with increased core counts is ambiguous: it is not clear how scalable the scheduler is. One gets the impression that it is very scalable, due to the fact that each core spawns a system call thread; thus, as many threads as there are cores could be running concurrently, for one or more processes [1]. More explicit results, however, would have been beneficial. Further, the paper mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable; however, it would be nice to know whether those hardware threads (2 per core) would actually be treated as cores when turned on. Would the scheduler then realize that it can use eight cores? Does that also mean the predefined static core list would need to be modified, to list eight instead of four?&lt;br /&gt;
&lt;br /&gt;
Along the same reasoning, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize on the possible performance outcome when using specialized GPUs, like NVIDIA&#039;s Tesla GPUs for example. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores, and hence use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept which involves collecting multiple system calls and submitting them as a single system call. They are used both in operating systems and in paravirtualized hypervisors.[11] The Cassyopia compiler has a special technique named a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.[6] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do. Multi-call system calls are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and in the presence of blocking, the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques, such as Soft Timers[13] and Lazy Receiver Processing[14], try to tackle the issue of locality of execution by handling device interrupts; both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is the most similar to the multicore execution of FlexSC: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, that work did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other solutions differ from FlexSC in two ways: they require a micro-kernel, and, unlike them, FlexSC can dynamically adapt the proportion of cores used by the kernel, or the cores shared by user and kernel execution. While all these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threading, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between many of the proposals for non-blocking execution and FlexSC is that none of the non-blocking system calls decouple the system call invocation from its execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tal, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] Barham, P., B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt and A. Warfield, &amp;lt;i&amp;gt;Xen and the Art of Virtualization&amp;lt;/i&amp;gt;, Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), 2003, pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] Larus, J. and M. Parkes, &amp;lt;i&amp;gt;Using Cohort-Scheduling to Enhance Server Performance&amp;lt;/i&amp;gt;, Proceedings of the USENIX Annual Technical Conference (ATEC), 2002, pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] Aron, M. and P. Druschel, &amp;lt;i&amp;gt;Soft Timers: Efficient Microsecond Software Timer Support for Network Processing&amp;lt;/i&amp;gt;, ACM Transactions on Computer Systems (TOCS) 18, 3, 2000, pp. 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] Druschel, P. and G. Banga, &amp;lt;i&amp;gt;Lazy Receiver Processing (LRP): A Network Subsystem Architecture for Server Systems&amp;lt;/i&amp;gt;, Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI), 1996, pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] Chakraborty, K., P. M. Wells and G. S. Sohi, &amp;lt;i&amp;gt;Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly&amp;lt;/i&amp;gt;, Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006, pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Teller, Patricia J., &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Computer, Volume 23, Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] IBM developerWorks, &amp;lt;i&amp;gt;Kernel Command Using Linux System Calls&amp;lt;/i&amp;gt;, IBM, 2010. [http://www.ibm.com/developerworks/linux/library/l-system-calls/ HTML]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6191</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6191"/>
		<updated>2010-12-02T05:00:34Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details on the specifics of the essay. It is essential to comprehend the basic vocabulary used in the paper in order to fully understand the ideas being discussed. The most important notions at the core of the FlexSC paper are system calls[21] and synchronous execution. These base definitions, along with numerous other helpful ideas, are explained in the section to follow. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. It is more vital for the reader to understand the core ideas of these definitions, along with the underlying motivation for their existence, than to understand the minute details of their processes. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between user space and kernel space. User space is not given direct access to the kernel&#039;s services, for several reasons (one being security); hence system calls are the messengers between user and kernel space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
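A minimal Python sketch (our own illustration, not from the paper) of what this definition describes: each call below asks the kernel for a service through a synchronous system call, and the process blocks until the kernel returns. The file path is arbitrary.

```python
import os

# Each of these Python calls is a thin wrapper over one synchronous
# system call: control traps into the kernel and the calling process
# blocks until the call returns.
path = "/tmp/syscall_demo.txt"
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)  # open(2)
written = os.write(fd, b"hello kernel")                           # write(2)
os.close(fd)                                                      # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)   # read(2): blocks until the data is available
os.close(fd)
os.remove(path)          # unlink(2)
```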
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
&amp;lt;b&amp;gt;Mode switches&amp;lt;/b&amp;gt; refer to transitions from one processor mode to another: specifically, from user mode to kernel mode or from kernel mode back to user mode. The direction of the transition does not matter; it is a general term. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution back to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;synchronous execution model (system call interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time and does not move on to the next system call until the previous one has finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
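A small hedged sketch of the difference: the Python standard library lets a file descriptor be put into non-blocking mode, so a read that would otherwise block instead returns control immediately. This illustrates non-blocking behaviour in general, not FlexSC itself.

```python
import os, errno

# A pipe read made non-blocking: instead of blocking until data arrives
# (synchronous behaviour), the call returns control immediately with
# EAGAIN when no data is ready.
r, w = os.pipe()
os.set_blocking(r, False)
try:
    os.read(r, 1)          # nothing has been written yet
    got_error = None
except OSError as e:
    got_error = e.errno    # EAGAIN: the call "would block", so it returned
os.write(w, b"x")
data = os.read(r, 1)       # data is now ready, so this returns at once
os.close(r)
os.close(w)
```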
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System call pollution&amp;lt;/b&amp;gt; refers to the wasteful or unnecessary delay caused by system calls. This pollution is a direct consequence of the fact that a system call invokes a mode switch, which is not a costless operation. The &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop its current execution unexpectedly in order to handle the issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
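A rough analogue available today is writev(2), which submits several buffers in a single system call. The sketch below (our example, not from the paper) replaces three separate write(2) calls, and therefore three mode switches, with one.

```python
import os

# Sketch of the batching idea: instead of three write(2) calls (three
# kernel crossings), writev(2) submits three buffers in one system call.
r, w = os.pipe()
chunks = [b"one ", b"two ", b"three"]
n = os.writev(w, chunks)   # one kernel crossing for three buffers
data = os.read(r, 64)      # the pipe now holds the concatenated bytes
os.close(r)
os.close(w)
```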
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity will be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
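The effect of spatial locality can be sketched with a toy cache-line model (entirely our own illustration, with no eviction): sequential access touches each cache line many times, while large-stride access touches a new line on every reference.

```python
# Toy model: memory is divided into "cache lines" of 8 words each, and we
# count how many accesses hit a line that has already been loaded.
LINE_WORDS = 8

def hits(addresses):
    cached = set()   # lines loaded so far (no eviction in this toy model)
    h = 0
    for a in addresses:
        line = a // LINE_WORDS
        if line in cached:
            h += 1   # the word shares a line with an earlier access: a hit
        else:
            cached.add(line)
    return h

sequential = list(range(64))                    # walks each line word by word
strided = [i * LINE_WORDS for i in range(64)]   # jumps to a new line each time
```

With 64 sequential accesses over 8-word lines, only the first touch of each line misses, so 56 of 64 accesses hit; the strided pattern never revisits a line and gets no hits at all.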
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is a major reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed, and when the process resumes it must be rebuilt from scratch. Too many context switches will therefore cause an increase in TLB misses and degrade performance.[17]&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As per the paper, locality refers to both types of locality defined above, i.e. temporal and spatial. A lack of locality thus means that the data and instructions needed most frequently by the application keep being evicted from registers and caches due to system calls, contributing to performance degradation.&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better.[2, p. 151]&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A regular store instruction is a typical assembly-language instruction which usually takes two arguments: a value, and the memory location where that value should be stored.&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A &amp;lt;b&amp;gt;syscall page&amp;lt;/b&amp;gt; is a page of memory shared between user space and kernel space, which FlexSC uses to communicate system call requests and their return values without raising processor exceptions.[1]&lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
&amp;lt;b&amp;gt;Syscall threads&amp;lt;/b&amp;gt; are kernel-only threads whose sole purpose is to pull submitted requests from syscall pages and execute them, decoupling system call execution from system call invocation.[1]&lt;br /&gt;
&lt;br /&gt;
===Inter-Processor Interrupt ===&lt;br /&gt;
An &amp;lt;b&amp;gt;inter-processor interrupt&amp;lt;/b&amp;gt; is an interrupt sent from one processor core to another, for example to signal that work is available for the receiving core; delivering one is a comparatively expensive operation.[1]&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer Problem ===&lt;br /&gt;
The &amp;lt;b&amp;gt;producer-consumer problem&amp;lt;/b&amp;gt; is a classic synchronization problem in which one or more producers place items in a shared buffer while one or more consumers remove and process them.[2] In FlexSC, user-mode threads act as producers that submit requests to syscall pages, while syscall threads act as consumers that execute those requests.[1]&lt;br /&gt;
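The producer-consumer relationship between user threads and syscall threads can be sketched as a tiny simulation in Python; the names and queue structure are illustrative only, not the actual FlexSC implementation.

```python
import queue, threading

# User threads (producers) enqueue requests; a "syscall thread"
# (consumer) dequeues and executes them, then records the results.
requests = queue.Queue()
results = {}

def syscall_thread():
    while True:
        req_id, fn, args = requests.get()
        if fn is None:          # sentinel: shut the consumer down
            break
        results[req_id] = fn(*args)

t = threading.Thread(target=syscall_thread)
t.start()
requests.put((1, pow, (2, 10)))     # producer submits work...
requests.put((2, len, ("flexsc",)))
requests.put((0, None, ()))         # ...then the shutdown sentinel
t.join()
```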
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls is the easy-to-program nature of sequential operation. However, this ease of use comes with undesirable side effects that can reduce the instructions per cycle (IPC) of the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while remaining easy for application programmers to use.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that, although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls neither make use of parallel execution of system calls nor manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between kernel and user as FlexSC can.[1] Non-blocking execution research has focused on threaded, event-based (non-blocking), and hybrid solutions; FlexSC, by contrast, provides a mechanism to separate system call execution from system call invocation. This is a key difference between FlexSC and previous research.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s attempt to provide an alternative to synchronous systems calls. The downside to synchronous system calls includes the cumulative mode switch time of multiple system calls each called independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially the most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is called, FlexSC groups system calls together into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and the indirect cost associated with switching modes: the pollution of critical processor structures. System call batching works by first requesting as many system calls as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;. Syscall pages are a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to make a request (system call) for kernel-mode procedures. A user-mode thread may place a system call request in a free entry of a syscall page; the kernel will then execute the request once the batching condition is met and store the return value in the syscall page. The user-mode thread can later return to the syscall page to obtain the return value. Neither issuing the system call via the syscall page nor retrieving the return value from it generates a processor exception. Each syscall page is a table of syscall entries. These entries may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt; - meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt; - meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt; - meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
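The three-state entry life cycle described above can be sketched in Python as follows; Entry, submit and kernel_pass are hypothetical names chosen for illustration, not the kernel's actual data structures.

```python
# A syscall page modelled as a table of entries, each in one of three
# states: free, submitted, or done.
FREE, SUBMITTED, DONE = "free", "submitted", "done"

class Entry:
    def __init__(self):
        self.state = FREE
        self.syscall = None   # name of the requested call
        self.args = None      # its arguments
        self.ret = None       # return value, filled in by the kernel side

page = [Entry() for _ in range(4)]

def submit(page, syscall, args):
    # User side: claim a free entry and mark it submitted (no exception).
    for e in page:
        if e.state == FREE:
            e.state, e.syscall, e.args = SUBMITTED, syscall, args
            return e
    return None   # page full: caller must wait for entries to drain

def kernel_pass(page, table):
    # Kernel side: what a syscall thread would do on one pass over the
    # page - execute every submitted entry and mark it done.
    for e in page:
        if e.state == SUBMITTED:
            e.ret = table[e.syscall](*e.args)
            e.state = DONE

e = submit(page, "add", (2, 3))
kernel_pass(page, {"add": lambda a, b: a + b})
```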
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute the request, always in kernel mode. This is the mechanic that allows exception-less system calls to provide the ability for a user-mode thread to issue a request and continue to run while the kernel level system call is being executed. In addition, since the system call invocation is separate from execution, a process running on one core may request a system call yet the execution of the system call may be completed on an entirely different core. This allows exception-less system calls the unique capability of having all system call execution delegated to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC threads are immediately capable of running multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming for an exception-less system call interface as compared to the relatively simple synchronous system call interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law. Moore&#039;s Law states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. The paper claims that this gap is mainly caused by the disparity in the cost of accessing different processor resources such as registers, cache, and memory.[1] In this manner, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to continue to build more powerful machines if the code we currently run will not speed up (become more efficient) along with the gain in power. Instead, we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls which can be batched before execution; in this situation the FlexSC system far outperformed traditional synchronous system calls.[1] This is why the research paper&#039;s focus is on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution, using synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work so that it doesn&#039;t need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if they need to block, FlexSC would jump to the next system call and execute that one. This can cause a problem for system calls such as reading and writing, where the write call has an outstanding dependency on the read call. However, this could be resolved by using some kind of combined system call, that is, multiple system calls executed as one single call. Unfortunately, FlexSC does not have any current handling for such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of cores for system calls. Currently, FlexSC first wakes up core X to run a system call thread, and when another batch comes in, if core X is still busy, it will then try core X-1, and so on. Of all the algorithms they tested, it turned out that this, the simplest algorithm, was the most efficient algorithm for FlexSC scheduling. However, this was only tested with FlexSC running a single application at a time. FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, such as in &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as realized by Vijay Vasudevan.[16] Vasudevan&#039;s research aims to reduce the energy footprint of data centers, and FlexSC was considered for this purpose. It was found that FlexSC&#039;s reduction of mode switches, via the use of memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it did not remove the requirement of data copying and did not reduce the overheads associated with interrupts in IO-intensive tasks.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) is still required, as per section 3.2 of the paper.[1] That means adopters would have to add or modify the referenced lines and recompile the kernel after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done: the paper mentions that decisions are made based on the workload requirements, which does not exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for system call thread assignments, but it is unclear when that list is created. Is it created at installation time, is it generated initially, or does the installer have to do manual work? On a related note, scalability with an increasing number of cores is ambiguous; it is not clear how scalable the scheduler is. One gets the impression that it is very scalable, since each core spawns a system call thread, so as many threads as there are cores could be running concurrently, for one or more processes.[1] More explicit results, however, would have been beneficial. The paper also mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable, but it would be good to know whether these hardware threads (two per core) would be treated as cores when turned on. Would the scheduler then realize that it can use eight cores? Would the predefined static core list need to be modified to list eight cores instead of four?&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Along the same lines, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize on the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores and use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept which involves collecting multiple system calls and submitting them as a single system call. They are used both in operating systems and in paravirtualized hypervisors.[11] The Cassyopia compiler has a special technique named a looped multi-call, whereby the result of one system call can be fed as an argument to another system call in the same multi-call.[6] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls as exception-less system calls do. Multi-call system calls are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and in the presence of blocking the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques include Soft Timers[13] and Lazy Receiver Processing[14], which tackle locality of execution through careful handling of device interrupts; both try to limit the processor interference associated with interrupt handling without increasing the latency of servicing requests. Another technique, Computation Spreading[15], is the most similar to the multicore execution of FlexSC: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, it did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other related solutions differ from FlexSC in two ways: they require a micro-kernel, and, unlike FlexSC, they cannot dynamically adapt the proportion of cores dedicated to kernel execution or shared between user and kernel execution. While these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers used threading, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between these proposals and FlexSC is that none of the non-blocking approaches decouple system call invocation from system call execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] Barham, P. &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Xen and the Art of Virtualization&amp;lt;/i&amp;gt;, Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), 2003, pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] Larus, J. and M. Parkes, &amp;lt;i&amp;gt;Using Cohort-Scheduling to Enhance Server Performance&amp;lt;/i&amp;gt;, Proceedings of the USENIX Annual Technical Conference (ATEC), 2002, pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] Aron, M. and P. Druschel, &amp;lt;i&amp;gt;Soft Timers: Efficient Microsecond Software Timer Support for Network Processing&amp;lt;/i&amp;gt;, ACM Transactions on Computer Systems (TOCS) 18, 3, 2000, pp. 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] Druschel, P. and G. Banga, &amp;lt;i&amp;gt;Lazy Receiver Processing (LRP): A Network Subsystem Architecture for Server Systems&amp;lt;/i&amp;gt;, Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI), 1996, pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] Chakraborty, K., P. M. Wells and G. S. Sohi, &amp;lt;i&amp;gt;Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly&amp;lt;/i&amp;gt;, Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006, pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23 Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] DeveloperWorks, Kernel Command using Linux System Calls IBM,2010.[http://www.ibm.com/developerworks/linux/library/l-system-calls/ ]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6184</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6184"/>
		<updated>2010-12-02T04:44:21Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;, written by Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf], for further details. To fully understand the ideas being discussed, it is essential to comprehend the basic vocabulary used in the paper. The most important notions in the FlexSC paper, at the core of it all, are system calls[21] and synchronous execution. These base definitions can be understood through the Background Concepts heading in the next section of this paper. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between User Space and Kernel Space. User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security); hence, system calls are the messengers between User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
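As a minimal concrete illustration, the following Python snippet issues a write system call through the os module, which wraps the raw kernel interface. File descriptor 1 (standard output) is used here simply because it is always open.

```python
import os

# os.write() is a thin wrapper over the write(2) system call:
# control transfers from user mode to kernel mode, the kernel
# copies the buffer out, and execution returns to user mode.
payload = b"hello from user space\n"
written = os.write(1, payload)  # fd 1 is standard output

# The return value is produced by the kernel: the number of
# bytes actually written.
assert written == len(payload)
```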
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
&amp;lt;b&amp;gt;Mode Switches&amp;lt;/b&amp;gt; refer to moving from one processor mode to another: specifically, from User Space mode to Kernel mode, or from Kernel mode back to User Space. The term is general and does not imply a particular direction. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
&amp;lt;b&amp;gt;Synchronous Execution Model (System Call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time and does not move on to the next system call until the previous one has finished executing. This form of system call is blocking, meaning the process which initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order, and programming with them can be compared to event-driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
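The distinction can be seen with ordinary POSIX non-blocking I/O, a close cousin of asynchronous system calls: below, a read that would normally block instead returns control to the caller immediately. This is standard-library Python, not FlexSC's interface.

```python
import os

# A pipe with its read end set non-blocking: a read() that would
# normally block (no data available) instead returns control to
# the caller immediately with an error, mirroring the idea that
# an asynchronous call does not suspend the calling process.
r, w = os.pipe()
os.set_blocking(r, False)

try:
    os.read(r, 64)           # nothing written yet: would block
    outcome = "data"
except BlockingIOError:
    outcome = "would-block"  # control returned immediately

os.write(w, b"ping")
assert outcome == "would-block"
assert os.read(r, 64) == b"ping"  # data now available
os.close(r)
os.close(w)
```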
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to the wasteful, indirect slowdown that system calls cause in the system. This pollution is a direct consequence of the mode switch that each system call invokes, which is not a costless task: the &amp;quot;pollution&amp;quot; takes the form of data over-written in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop its current execution unexpectedly in order to handle an issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity tend to be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
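A small sketch of the two access patterns (using Python lists, so the cache effect is only illustrative, not measured):

```python
# Spatial locality: elements adjacent in memory are touched in
# order (row-major traversal). The two traversals below visit
# identical data, but the first follows the layout of the nested
# lists while the second jumps between rows on every access.
N = 256
grid = [[r * N + c for c in range(N)] for r in range(N)]

row_major = sum(grid[r][c] for r in range(N) for c in range(N))
col_major = sum(grid[r][c] for c in range(N) for r in range(N))

# Same result either way; on real hardware the row-major order is
# typically faster because consecutive accesses hit the same cache
# lines (spatial locality), and a reused element that is still
# cached is an example of temporal locality.
assert row_major == col_major
```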
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
will add the following terms.&amp;lt;br&amp;gt;&lt;br /&gt;
TODO-Start --[[User:Tafatah|Tafatah]] 16:13, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I did some of these, some I don&#039;t think I can adequately explain, or just have no idea what they are, so I left them. --[[User:CFaibish|CFaibish]] 00:31, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&lt;br /&gt;
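The flush-and-rebuild behavior described above can be modelled with a toy translation table; the class below is an illustrative simulation, not a description of any real MMU.

```python
# A toy model of TLB behavior across context switches: the TLB
# caches virtual-to-physical page translations, and a context
# switch flushes it, so subsequent accesses must refill the table
# (counted here as misses).
class ToyTLB:
    def __init__(self):
        self.entries = {}   # virtual page -> physical page
        self.misses = 0

    def translate(self, vpage, page_table):
        if vpage not in self.entries:
            self.misses += 1                 # TLB miss: walk the page table
            self.entries[vpage] = page_table[vpage]
        return self.entries[vpage]

    def context_switch(self):
        self.entries.clear()                 # entire buffer flushed

page_table = {v: v + 100 for v in range(8)}
tlb = ToyTLB()

for v in (0, 1, 0, 1):                       # warm accesses
    tlb.translate(v, page_table)
assert tlb.misses == 2                       # only the first touches miss

tlb.context_switch()                         # OS switches context
for v in (0, 1):
    tlb.translate(v, page_table)
assert tlb.misses == 4                       # must be rebuilt from scratch
```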
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As per the paper, locality refers to both types defined above, temporal and spatial. A lack of locality thus means that the data and instructions needed most frequently by the application keep being displaced (from registers and caches) by system calls, contributing to performance degradation.&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better.[2, p. 151]&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction refers to a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A &amp;lt;b&amp;gt;syscall page&amp;lt;/b&amp;gt; is a memory page shared between user mode and kernel mode through which exception-less system calls are made: user threads write requests into free entries of the page and later read the return values back from it, without raising a processor exception.[1]&lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
&amp;lt;b&amp;gt;Syscall threads&amp;lt;/b&amp;gt; are kernel threads whose sole purpose is to pull submitted requests from syscall pages and execute them in kernel mode, decoupling the execution of a system call from its invocation.[1]&lt;br /&gt;
&lt;br /&gt;
===Inter-Processor Interrupt ===&lt;br /&gt;
An &amp;lt;b&amp;gt;inter-processor interrupt&amp;lt;/b&amp;gt; (IPI) is an interrupt sent by one processor core to another, for example to force a remote core to reschedule; signalling between cores with IPIs is a comparatively expensive operation.[1]&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer Problem ===&lt;br /&gt;
The producer-consumer problem is the classic synchronization problem in which one party produces items into a shared buffer while another party consumes them. The relationship appears directly in FlexSC: user-mode threads produce system call requests on the syscall pages, and syscall threads consume and execute those requests.[1][2]&lt;br /&gt;
&lt;br /&gt;
TODO End --[[User:Tafatah|Tafatah]] 16:13, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls is the easy-to-program nature of sequential operation. However, this ease of use comes with undesirable side effects which can reduce the instructions per cycle (IPC) of the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy for application programmers to adopt.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily, and it is accepted that, although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], on &amp;lt;b&amp;gt;locality of execution with multicore systems&amp;lt;/b&amp;gt;[7][8], and on &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls make no use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel and, although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between kernel and user the way FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking), and hybrid solutions; FlexSC differs from this previous research in providing a mechanism that separates system call execution from system call invocation.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s proposed alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode switch time of multiple system calls each invoked independently, state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is called, FlexSC groups system calls together into batches. These batches can then be executed at one time, minimizing the frequency of mode switches between user and kernel modes. Batching provides a benefit both in terms of the direct cost of mode switching and in terms of the indirect cost, the pollution of critical processor structures, associated with switching modes. System call batching works by first requesting as many system calls as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;. Syscall pages are a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to request kernel-mode procedures (system calls). A user-mode thread may place a system call request in a free entry of a syscall page; the request is executed once the batching condition is met, and the return value is stored back on the syscall page, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor getting the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and each entry may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute them, always in kernel mode. This is the mechanism that allows exception-less system calls to let a user-mode thread issue a request and continue to run while the kernel-level system call is being executed. In addition, since system call invocation is separate from execution, a process running on one core may request a system call whose execution completes on an entirely different core. This gives exception-less system calls the unique capability of delegating all system call execution to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
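Since the four mechanisms above fit together, a toy end-to-end model may help: the sketch below simulates a syscall page as a table of Free/Submitted/Done entries drained by a separate "syscall thread". All names and the handler table are illustrative; the real FlexSC interface lives in the kernel, not in Python.

```python
import threading

# Toy model of the exception-less interface: a "syscall page" is a
# shared table of entries, each FREE, SUBMITTED, or DONE. The user
# side fills FREE entries with requests and keeps running; a
# separate "syscall thread" drains SUBMITTED entries, executes
# them, and posts results as DONE. Handlers stand in for kernel
# operations and are purely illustrative.
FREE, SUBMITTED, DONE = range(3)

class Entry:
    def __init__(self):
        self.state = FREE
        self.call = None
        self.args = None
        self.result = None

page = [Entry() for _ in range(4)]
lock = threading.Lock()
handlers = {"add": lambda a, b: a + b, "neg": lambda a: -a}

def syscall_thread():
    done = 0
    while done != 3:                   # drain until all requests finish
        with lock:
            for e in page:
                if e.state == SUBMITTED:
                    e.result = handlers[e.call](*e.args)
                    e.state = DONE
                    done += 1

def submit(i, call, args):
    with lock:
        assert page[i].state == FREE   # only free entries accept requests
        page[i].call, page[i].args = call, args
        page[i].state = SUBMITTED

submit(0, "add", (2, 3))
submit(1, "neg", (7,))
submit(2, "add", (1, 1))
t = threading.Thread(target=syscall_thread)
t.start()
t.join()
assert [page[i].result for i in range(3)] == [5, -7, 2]
```

Note that submitting a request never raises an exception or switches modes in this model; only the dedicated thread performs the "kernel" work, mirroring the decoupling described in item 4.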
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC Threads can immediately run multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming against an exception-less system call interface compared to the relatively simple synchronous interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law. Moore&#039;s Law states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time it has opened a large gap between the actual performance of efficient and inefficient software. The paper claims that this gap is mainly caused by the disparity in the cost of accessing different processor resources such as registers, caches, and memory.[1] In this light, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to keep building more powerful machines if the code we run does not become more efficient along with the gain in power. Instead, we need to focus on the appropriate allocation and usage of that power, since failing to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls which can be batched before execution; in this situation, FlexSC far outperformed traditional synchronous system calls.[1] This is why the research paper focuses on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution, with synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work so that it doesn&#039;t need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if they need to block, FlexSC would jump to the next system call and execute that one. This can cause a problem for system calls such as reading and writing, where the write call has an outstanding dependency on the read call. However, this could be resolved by using some kind of combined system call, that is, multiple system calls executed as one single call. Unfortunately, FlexSC does not have any current handling for such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of the cores to system calls. Currently, FlexSC first wakes up core X to run a syscall thread; when another batch comes in and core X is still busy, it tries core X-1, and so on. Of all the algorithms tested, this simplest one turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, as in &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as realized by Vijay Vasudevan.[16] Vasudevan&#039;s research aims to reduce the energy footprint of data centers, and FlexSC was considered for that purpose. It was found that FlexSC&#039;s reduction of mode switches, via the use of memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique was not useful for IO-intensive work, however, since it neither removed the requirement of data copying nor reduced the overheads associated with interrupts in IO-intensive tasks.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) is still required, as per section 3.2 of the paper.[1] That means adopters would have to add or modify the referenced lines and recompile the kernel, and do so again after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running syscall threads. It is unclear how this dynamic allocation is done: the paper mentions that decisions are made based on the workload requirements, which does not exactly clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for syscall thread assignments, but it is unclear when that list is created. Is it created at installation time, is it generated initially, or does the installer have to do any manual work? On a related note, scalability with an increasing number of cores is ambiguous; it is not clear how scalable the scheduler is. One gets the impression that it is very scalable, because each core spawns a syscall thread, so as many threads as there are cores could be running concurrently, for one or more processes.[1] More explicit results, however, would have been beneficial. The paper also mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable, but it would be useful to know whether those hardware threads (two per core) would be treated as cores when turned on. That is, would the scheduler then realize that it can use eight cores, and would the predefined static core list need to be modified to list eight instead of four?&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same line of reasoning, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize about the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores and use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a mechanism for collecting multiple system calls and submitting them as a single system call; they have been used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler adds a technique named the looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is nonetheless a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address blocking system calls. The system calls in a multi-call are executed sequentially, and each must complete before the next may start, whereas exception-less system calls can execute in parallel, and when one call blocks, the next can begin immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques include Soft Timers[13] and Lazy Receiver Processing[14], which tackle locality of execution in the handling of device interrupts; both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is the most similar to the multicore execution of FlexSC: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, its authors did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Such solutions also differ from FlexSC in two ways: they require a micro-kernel, and FlexSC can dynamically adapt the proportion of cores used by the kernel, or shared between user and kernel execution. While these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threading, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between these proposals for non-blocking execution and FlexSC is that none of the non-blocking system call mechanisms decouple system call invocation from system call execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tal, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] Barham, Paul &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Xen and the Art of Virtualization&amp;lt;/i&amp;gt;, Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP), 2003, pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] Larus, James and Michael Parkes, &amp;lt;i&amp;gt;Using Cohort Scheduling to Enhance Server Performance&amp;lt;/i&amp;gt;, Proceedings of the USENIX Annual Technical Conference (ATEC), 2002, pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] Aron, Mohit and Peter Druschel, &amp;lt;i&amp;gt;Soft Timers: Efficient Microsecond Software Timer Support for Network Processing&amp;lt;/i&amp;gt;, ACM Transactions on Computer Systems (TOCS) 18, 3, 2000, pp. 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] Druschel, Peter and Gaurav Banga, &amp;lt;i&amp;gt;Lazy Receiver Processing (LRP): A Network Subsystem Architecture for Server Systems&amp;lt;/i&amp;gt;, Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI), 1996, pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] Chakraborty, Koushik &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly&amp;lt;/i&amp;gt;, Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006, pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Teller, Patricia J., &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, IEEE Computer, Volume 23, Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] Drepper, Ulrich and Ingo Molnar, &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;, Tech. rep., Red Hat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] DeveloperWorks, &amp;lt;i&amp;gt;Kernel Command Using Linux System Calls&amp;lt;/i&amp;gt;, IBM, 2010.[http://public.dhe.ibm.com/software/dw/linux/l-system-calls/l-system-calls-pdf.pdf/ PDF]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6183</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6183"/>
		<updated>2010-12-02T04:41:26Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details. To fully understand the ideas being discussed, it is essential to comprehend the basic vocabulary used in the paper. The most important notions at the core of the FlexSC paper are system calls[21] and synchronous system calls. These base definitions can be understood through the Background Concepts heading in the next section of this paper. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
&amp;lt;b&amp;gt;Mode Switches&amp;lt;/b&amp;gt; refer to moving from one execution mode to another: specifically, from user mode to kernel mode, or from kernel mode back to user mode. The direction does not matter; it is a general term. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;synchronous execution model (system call interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time and does not move on to the next system call until the previous one has finished executing. This form of system call is blocking, meaning the process that initiates the system call is blocked until the system call returns. Traditionally, operating system calls have mostly been synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to the wasteful or unnecessary delay caused by system calls. This pollution is a direct consequence of the fact that a system call invokes a mode switch, which is not a costless operation. The &amp;quot;pollution&amp;quot; takes the form of data overwritten in critical processor structures such as the TLB (translation look-aside buffer, a table that reduces the frequency of main-memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations that cause the processor to stop its current execution unexpectedly in order to handle an issue. Many situations generate processor exceptions, including undefined instructions and software interrupts (system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the concept that during execution there will be a tendency for the same set of data to be accessed repeatedly over a brief time period. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity tend to be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
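The payoff of temporal locality can be illustrated with a tiny cache simulation. This is only a sketch: the LRU model, the capacity of 4 lines, and the two access traces are invented for the example and are not taken from the paper.

```python
from collections import OrderedDict

def hit_rate(trace, capacity=4):
    """Simulate a tiny fully-associative LRU cache and return its hit rate."""
    cache = OrderedDict()
    hits = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # refresh this line's LRU position
        else:
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used line
    return hits / len(trace)

# High temporal locality: a small working set touched over and over.
looped = [a for _ in range(25) for a in (0, 1, 2, 3)]
# No reuse at all: a streaming access pattern.
streaming = list(range(100))

assert hit_rate(looped) == 0.96     # only the 4 first touches miss
assert hit_rate(streaming) == 0.0   # every access misses
```

A system call that overwrites the cached working set pushes an application from the first regime toward the second, which is exactly the "pollution" cost the paper measures.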
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can have such large performance penalties. Every time the OS switches context, the entire buffer is flushed. When the process resumes, it must be rebuilt from scratch. Too many context switches will therefore cause an increase in cache misses and degrade performance.[17]&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As used in the paper, locality refers to both types defined above, temporal and spatial. A lack of locality thus means that the data and instructions the application needs most frequently keep being evicted from registers and caches due to system calls, contributing to performance degradation.&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour; the higher n is, the better.[2, p. 151]&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction refers to a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a memory page shared between user space and the kernel through which exception-less system calls are requested: user-mode threads write system call requests into free entries of the page and later read the return values back, without triggering a processor exception.[1]&lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are kernel-mode threads whose sole purpose is to pull submitted requests from syscall pages and execute them; they are the mechanism that decouples system call execution from system call invocation.[1]&lt;br /&gt;
&lt;br /&gt;
===Inter-Processor Interrupt ===&lt;br /&gt;
An inter-processor interrupt is an interrupt that one core sends to another core, for example to force a thread migration. The paper treats these interrupts as comparatively expensive, which is why FlexSC avoids relying on them.&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer Problem ===&lt;br /&gt;
The producer-consumer problem is the classic synchronization pattern in which producers place items into a shared, bounded buffer while consumers remove them. In FlexSC, user-mode threads act as producers, writing system call requests into syscall pages, and syscall threads act as consumers, draining and executing those requests.&lt;br /&gt;
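This producer-consumer relationship can be sketched with a bounded queue standing in for the syscall page. Everything here is illustrative (thread counts, the squaring "system call", the queue size), not FlexSC's actual mechanism:

```python
import queue
import threading

# A bounded queue plays the role of a syscall page: producers block when
# it is full, the consumer blocks when it is empty.
requests = queue.Queue(maxsize=8)
results = {}

def producer(tid, n):
    """User-mode thread: produce n syscall requests."""
    for i in range(n):
        requests.put((tid, i))          # blocks if the 'page' is full

def consumer(total):
    """Syscall thread: consume and 'execute' every request."""
    for _ in range(total):
        tid, i = requests.get()         # blocks if the 'page' is empty
        results[(tid, i)] = i * i       # stand-in for executing the call

threads = [threading.Thread(target=producer, args=(t, 10)) for t in range(3)]
threads.append(threading.Thread(target=consumer, args=(30,)))
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(results) == 30
assert results[(1, 4)] == 16
```

The bounded buffer is what lets both sides run at their own pace, which is the property FlexSC exploits to batch calls.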
&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls is the easy-to-program nature of sequential operation. However, this ease of use comes with undesirable side effects that can lower the instructions per cycle (IPC) achieved by the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm attempt to provide a new form of system call which minimizes the negative effects of synchronous system calls while still remaining easy for application programmers to use.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that, although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls neither make use of parallel execution of system calls nor manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel, and although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel and the cores shared between kernel and user the way FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking), and hybrid solutions; FlexSC differs in providing a mechanism that separates system call execution from system call invocation.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode switch time of multiple system calls each invoked independently, the state pollution of key processor structures (TLB, cache, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is issued, FlexSC groups system calls into batches. These batches can then be executed at one time, minimizing the frequency of mode switches between user and kernel modes. Batching reduces both the direct cost of mode switching and the indirect cost, the pollution of critical processor structures associated with switching modes. System call batching works by first collecting as many system call requests as possible, then switching to kernel mode, and then executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;. Syscall pages are a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to request kernel-mode procedures (system calls). A user-mode thread writes a system call request into a free entry of a syscall page; the call is executed once the batch condition is met, and its return value is stored back in the syscall page, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor getting the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and an entry may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall can be added to the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call operations; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel is finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call&#039;s invocation from its execution, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute them, always in kernel mode. This is the mechanism that allows a user-mode thread to issue a request and continue to run while the kernel-level system call is being executed. In addition, since invocation is separate from execution, a process running on one core may request a system call whose execution completes on an entirely different core. This gives exception-less system calls the unique capability of delegating all system call execution to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
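The interplay of batching, the three entry states, and a draining syscall thread described above can be sketched as follows. This is a single-threaded simulation with invented names (SyscallPage, the stand-in call table); FlexSC's real shared-memory ABI and kernel threads are, of course, not Python:

```python
from enum import Enum

class State(Enum):
    FREE = 0       # a request can be written into this entry
    SUBMITTED = 1  # the kernel may execute this entry
    DONE = 2       # the return value is ready to be collected

class Entry:
    """One slot of a syscall page (illustrative layout, not FlexSC's ABI)."""
    def __init__(self):
        self.state, self.sysnum, self.args, self.retval = State.FREE, None, None, None

class SyscallPage:
    def __init__(self, n_entries=64):
        self.entries = [Entry() for _ in range(n_entries)]

    def submit(self, sysnum, *args):
        """User side: claim a free entry. No mode switch happens here."""
        for slot, e in enumerate(self.entries):
            if e.state is State.FREE:
                e.sysnum, e.args, e.state = sysnum, args, State.SUBMITTED
                return slot
        raise RuntimeError("page full: a real system would flush the batch")

    def collect(self, slot):
        """User side: retrieve a result and recycle the entry."""
        e = self.entries[slot]
        assert e.state is State.DONE
        ret, e.state = e.retval, State.FREE
        return ret

def syscall_thread(page, table):
    """Kernel side: drain every SUBMITTED entry in one batched pass."""
    for e in page.entries:
        if e.state is State.SUBMITTED:
            e.retval = table[e.sysnum](*e.args)
            e.state = State.DONE

# A stand-in 'system call table' for the sketch.
table = {0: lambda x, y: x + y, 1: lambda s: len(s)}

page = SyscallPage()
a = page.submit(0, 2, 3)       # request call 0 with args (2, 3)
b = page.submit(1, "hello")    # request call 1
syscall_thread(page, table)    # one kernel pass serves both requests
assert page.collect(a) == 5
assert page.collect(b) == 5
```

Because `submit` and `collect` only touch shared memory, the user thread never takes an exception; the single `syscall_thread` pass is where the batch crosses into the kernel.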
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC threads are a key component of the exception-less system call interface. FlexSC threads transform regular, synchronous system calls into exception-less system calls and are compatible with both the POSIX and default Linux thread libraries. This means that FlexSC threads are immediately capable of running multi-threaded Linux applications with no modifications. The intended use of these threads is with server-type applications which contain many user-mode threads. In order to accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of a system. In this manner, multiple user-mode threads can be multiplexed onto a single syscall page, which in turn has a single kernel-level thread to facilitate execution of the system calls. Programming with FlexSC threads can be compared to event-driven programming, as interactions are not guaranteed to be sequential. This does increase the complexity of programming for an exception-less system call interface as compared to the relatively simple synchronous interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law. Moore&#039;s Law states that the number of transistors on a chip doubles every 18 months.[10] This has led to very large increases in the performance potential of software, but at the same time has opened a large gap between the actual performance of efficient and inefficient software. The paper claims that this gap is mainly caused by the disparity in cost of accessing different processor resources such as registers, cache, and memory.[1] In this manner, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to continue to build more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest to note that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls which can in turn be batched before execution; in that situation the FlexSC system far outperformed traditional synchronous system calls.[1] This is why the research paper&#039;s focus is on server-like applications, as servers must handle many user requests efficiently to be useful. Thus, for the general case it appears that a hybrid solution, using synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism. FlexSC can &#039;harvest&#039; enough independent work so that it doesn&#039;t need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if they need to block, FlexSC would jump to the next system call and execute that one. This can cause a problem for system calls such as reading and writing, where the write call has an outstanding dependency on the read call. However, this could be resolved by using some kind of combined system call, that is, multiple system calls executed as one single call. Unfortunately, FlexSC does not have any current handling for such an implementation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
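The dependency problem described above can be made concrete with a small sketch: a call chained to an earlier call's result cannot even be submitted until the first batch has been flushed, so the programmer must order the flushes by hand. The Pending/Batch names and the dictionary standing in for file data are invented for illustration:

```python
class Pending:
    """A deferred result, standing in for one exception-less call."""
    def __init__(self, fn):
        self.fn, self.result, self.done = fn, None, False

class Batch:
    def __init__(self):
        self.calls = []

    def submit(self, fn):
        """User side: record a request without executing it."""
        p = Pending(fn)
        self.calls.append(p)
        return p

    def run(self):
        """Kernel-side pass: execute whatever has been submitted so far."""
        for p in self.calls:
            if not p.done:
                p.result, p.done = p.fn(), True

# A dictionary stands in for file contents reachable through 'fds'.
data = {"src": b"payload", "dst": b""}

batch = Batch()
read = batch.submit(lambda: data["src"])    # a sys_read analogue
batch.run()                                 # flush: the read must finish...
assert read.done

# ...before the dependent write can even be formed from its result.
write = batch.submit(lambda: data.__setitem__("dst", read.result))
batch.run()
assert data["dst"] == b"payload"
```

Had both calls been submitted into one batch, the write would have run against an empty result; a combined read-then-write "multi-call", which FlexSC lacks, would remove the need for the intermediate flush.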
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of cores to system calls. Currently, FlexSC first wakes up core X to run a syscall thread; when another batch comes in and core X is still busy, it tries core X-1, and so on. Of all the scheduling algorithms the authors tested, this simplest one turned out to be the most efficient for FlexSC. However, this was only tested with FlexSC running a single application at a time; FlexSC&#039;s scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
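The highest-core-first policy described above is simple enough to state in a few lines. The function name and busy-set representation are invented for the sketch; they mirror the paper's described behavior, not its code:

```python
def pick_syscall_core(busy, n_cores):
    """Scan from the highest-numbered core downward and return the first
    idle core, or None if every core is busy (the batch then waits)."""
    for core in range(n_cores - 1, -1, -1):
        if core not in busy:
            return core
    return None

# On a 4-core machine: core X (=3) first, then X-1, and so on.
assert pick_syscall_core(set(), 4) == 3
assert pick_syscall_core({3}, 4) == 2
assert pick_syscall_core({0, 1, 2, 3}, 4) is None
```

The appeal of the policy is that, with a single application, it concentrates kernel work on the top cores and leaves the bottom cores warm for user-mode execution.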
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and acts primarily in user space, as in &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as observed by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers. FlexSC was considered there: its reduction of mode switches, via memory pages shared between user space and kernel space, was found useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it removed neither the requirement of data copying nor the overheads associated with interrupts in IO-intensive tasks.&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently, i.e. there is no need to modify application code, a small kernel change (3 lines of code) remains necessary, as per section 3.2 of the paper [1].&amp;lt;br&amp;gt;&lt;br /&gt;
That means adopters would have to add or modify the referenced lines and recompile the kernel, and repeat this after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done: decisions are said to be made based on workload requirements, which does not exactly clarify the mechanism.&amp;lt;br&amp;gt;&lt;br /&gt;
Further, the paper mentions that a predefined, static list of cores is used for assigning system call threads. It is unclear when that list is created: at installation time, generated initially, or through manual work by the installer. On a related note, scalability with increased core counts is ambiguous. One gets the impression that the scheduler is very scalable, since each core spawns a system call thread, so as many threads as there are cores could be running concurrently, for one or more processes [1]. More explicit results, however, would have been beneficial.&amp;lt;br&amp;gt;&lt;br /&gt;
Further, the paper mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable, but it would be nice to know whether those hardware threads (two per core) would be treated as cores when turned on. Would the scheduler then realize it can use eight cores, and would the predefined static core list need to be modified to list eight instead of four?&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same reasoning, and given the growing popularity of GPUs for general-purpose programming, it would have been useful to at least hypothesize about the possible performance outcome when using specialized GPUs, such as NVIDIA&#039;s Tesla GPUs. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores, and use them for specialized purposes?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a mechanism for collecting multiple system calls and submitting them as a single system call; they are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call.[11] There is a significant difference between multi-calls and exception-less system calls: multi-calls do not investigate parallel execution of system calls, nor do they address blocking system calls as exception-less system calls do. The system calls in a multi-call are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and in the presence of blocking, the next call can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
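The looped multi-call described above can be sketched as follows. This is an illustrative model only; the entry layout and names are our own invention, not Cassyopia&#039;s. It shows how one batched submission stands in for several mode switches and how the result of one call feeds the next:&lt;br /&gt;

```python
# Illustrative sketch of a looped multi-call: several system calls are
# submitted as one batch, and the result of each call is fed as an
# argument to the next. Names and layout are hypothetical, not Cassyopia's.

def multi_call(kernel_ops, batch):
    """Execute a batch sequentially inside one simulated kernel entry."""
    mode_switches = 1  # a single user/kernel round trip for the whole batch
    result = None
    for op_name, arg in batch:
        # "PREV" means: use the result of the previous call as the argument
        if arg == "PREV":
            arg = result
        result = kernel_ops[op_name](arg)
    return result, mode_switches

# Toy "kernel" operations standing in for open/read-style calls
kernel_ops = {
    "open": lambda path: {"fd": 3, "path": path},
    "read": lambda f: f"data from {f['path']}",
}

result, switches = multi_call(kernel_ops, [("open", "/tmp/x"), ("read", "PREV")])
print(result, switches)  # one mode switch instead of two
```

Note that the batch still executes sequentially inside the simulated kernel entry, which is exactly why blocking in one call stalls the rest, unlike exception-less system calls.&lt;br /&gt;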
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations.[12] Other techniques, including Soft Timers[13] and Lazy Receiver Processing[14], tackle locality of execution in the handling of device interrupts: both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading[15], is the most similar to the multicore execution of FlexSC. It proposes processor modifications that allow hardware migration of threads to specialized cores. However, the authors did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other solutions differ from FlexSC in two ways: they require a micro-kernel, and unlike FlexSC they cannot dynamically adapt the proportion of cores dedicated to the kernel or shared between user and kernel execution. While all of these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Researchers have typically used threading, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between these proposals and FlexSC is that none of the non-blocking approaches decouple the system call invocation from its execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tim, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] BARHAM, P., DRAGOVIC, B., FRASER, K., HAND, S., HARRIS, T., HO, A., NEUGEBAUER, R., PRATT, I., AND WARFIELD, A. Xen and the art of virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP) (2003), pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] LARUS, J., AND PARKES, M. Using Cohort-Scheduling to Enhance Server Performance. In Proceedings of the annual conference on USENIX Annual Technical Conference (ATEC) (2002), pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] ARON, M., AND DRUSCHEL, P. Soft timers: efficient microsecond software timer support for network processing. ACM Trans. Comput. Syst. (TOCS) 18, 3 (2000), 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] DRUSCHEL, P., AND BANGA, G. Lazy receiver processing (LRP): a network subsystem architecture for server systems. In Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI) (1996), pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] CHAKRABORTY, K., WELLS, P. M., AND SOHI, G. S. Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2006), pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, Journal Volume 23 Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[21] DeveloperWorks, IBM. Kernel Command using Linux System Calls[http://www.ibm.com/developerworks/linux/library/l-system-calls/]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6182</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=6182"/>
		<updated>2010-12-02T04:36:03Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;, written by Livio Soares and Michael Stumm, both of the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details. To fully understand the ideas being discussed, it is essential to comprehend the basic concepts and vocabulary used in the paper. The most important notions in the FlexSC paper, which are at the core of it all, are system calls and synchronous execution. These base definitions can be understood through the Background Concepts heading in the next section of this paper. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
To fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
===System Call===&lt;br /&gt;
A &amp;lt;b&amp;gt;System Call&amp;lt;/b&amp;gt; is the gateway between user space and kernel space. User space is not given direct access to the kernel&#039;s services, for several reasons (one being security); hence system calls act as the messengers between user and kernel space.[1][4]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Mode Switch===&lt;br /&gt;
A &amp;lt;b&amp;gt;Mode Switch&amp;lt;/b&amp;gt; refers to moving between privilege modes: from user mode to kernel mode, or from kernel mode back to user mode. The term is general and does not depend on the direction of the switch. Crucial to mode switching is the &amp;lt;b&amp;gt;mode switch time&amp;lt;/b&amp;gt;, which is the time necessary to execute a system call instruction in user mode, perform the kernel-mode execution of the system call, and finally return execution back to user mode.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Synchronous System Call===&lt;br /&gt;
The &amp;lt;b&amp;gt;Synchronous Execution Model (System Call Interface)&amp;lt;/b&amp;gt; refers to the structure in which system calls are managed in a serialized manner: the model completes one system call at a time and does not move on to the next until the previous one has finished executing. This form of system call is blocking, meaning the process that initiates the system call is blocked until the system call returns. Traditionally, operating system calls are mostly synchronous.[1][2]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Asynchronous System Call===&lt;br /&gt;
An &amp;lt;b&amp;gt;asynchronous system call&amp;lt;/b&amp;gt; is a system call which does not block upon invocation; control of execution is returned to the calling process immediately. Asynchronous system calls do not necessarily execute in order and can be compared to event driven programming.[2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
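The contrast between the two models can be sketched with ordinary threads. This is a toy analogy with invented names, not an actual kernel interface: the synchronous version makes the caller wait, while the asynchronous version returns immediately and the result is collected later:&lt;br /&gt;

```python
import threading
import time

def slow_kernel_service(x):
    time.sleep(0.05)  # stands in for kernel-side work such as disk IO
    return x * 2

def sync_call(x):
    # Synchronous model: the caller blocks until the result is ready.
    return slow_kernel_service(x)

def async_call(x):
    # Asynchronous model: return immediately; the caller polls or is
    # notified later, much like event-driven programming.
    box = {}
    def run():
        box["result"] = slow_kernel_service(x)
    t = threading.Thread(target=run)
    t.start()
    return t, box  # control returns to the caller right away

print(sync_call(21))     # blocks, then prints 42
t, box = async_call(21)  # returns immediately; caller can keep working
t.join()                 # later, wait for completion
print(box["result"])     # 42
```

The design point this illustrates is the one the paper builds on: once invocation and completion are separated, useful work can be done in between.&lt;br /&gt;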
&lt;br /&gt;
===System Call Pollution===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Pollution&amp;lt;/b&amp;gt; refers to wasteful or unnecessary delay caused by system calls. This pollution is a direct consequence of the mode switch a system call invokes, which is not a costless operation. The &amp;quot;pollution&amp;quot; takes the form of data overwritten in critical processor structures such as the TLB (translation look-aside buffer, a table which reduces the frequency of main memory accesses for page table entries), branch prediction tables, and the caches (L1, L2, L3).[1][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Processor Exceptions===&lt;br /&gt;
&amp;lt;b&amp;gt;Processor exceptions&amp;lt;/b&amp;gt; are situations which cause the processor to stop current execution unexpectedly in order to handle the issue. There are many situations which generate processor exceptions including undefined instructions and software interrupts(system calls).[5]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&amp;lt;b&amp;gt;System Call Batching&amp;lt;/b&amp;gt; is the concept of collecting system calls together to be executed in a group instead of executing them immediately after they are called.[6]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Temporal and Spatial Locality===&lt;br /&gt;
Locality is the tendency, during execution, for the same set of data to be accessed repeatedly over a brief period of time. There are two important forms of locality: &amp;lt;b&amp;gt;spatial locality&amp;lt;/b&amp;gt; and &amp;lt;b&amp;gt;temporal locality&amp;lt;/b&amp;gt;. Spatial locality refers to the pattern that memory locations in close physical proximity tend to be referenced close together in a short period of time. Temporal locality, on the other hand, is the tendency of recently requested memory locations to be requested again.[7][8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Instructions Per Cycle (IPC)===&lt;br /&gt;
&amp;lt;b&amp;gt;Instructions per cycle&amp;lt;/b&amp;gt; is the number of instructions a processor can execute in a single clock cycle.[9]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Translation Look-Aside Buffer (TLB)===&lt;br /&gt;
A TLB is a table used in a virtual memory system that lists the physical address page number associated with each virtual address page number. A TLB is used in conjunction with a cache whose tags are based on virtual addresses. The virtual address is presented simultaneously to the TLB and to the cache so that cache access and the virtual-to-physical address translation can proceed in parallel. If the requested address is not cached then the physical address is used to locate the data in main memory. &lt;br /&gt;
&lt;br /&gt;
The TLB is the reason context switches can carry such large performance penalties. Every time the OS switches context, the entire buffer is flushed; when the process resumes, the buffer must be rebuilt from scratch. Too many context switches will therefore cause an increase in TLB misses and degrade performance.[17]&lt;br /&gt;
&lt;br /&gt;
===Lack of Locality ===&lt;br /&gt;
As used in the paper, locality refers to both types of locality defined above, temporal and spatial. A lack of locality here means that the data and instructions the application needs most frequently keep being evicted from registers and caches because of system calls, thus contributing to performance degradation.&lt;br /&gt;
&lt;br /&gt;
===Throughput ===&lt;br /&gt;
Throughput is an indication of how much work is done during a unit of time, e.g. n transactions per hour. The higher n is, the better. [2, p. 151]&lt;br /&gt;
&lt;br /&gt;
===Regular Store Instructions ===&lt;br /&gt;
A store instruction is a typical assembly-language instruction that usually takes two arguments: a value, and the memory location where that value should be stored.&lt;br /&gt;
&lt;br /&gt;
===Linux Application Binary Interface (ABI)===&lt;br /&gt;
The ABI is a patch to the kernel that allows you to run SCO, Xenix, Solaris ix86, and other binaries on Linux.[18]&lt;br /&gt;
&lt;br /&gt;
===Native POSIX Thread Library (NPTL)===&lt;br /&gt;
NPTL is a software component that allows the Linux kernel to run applications optimized for POSIX Thread efficiency.[19]&lt;br /&gt;
&lt;br /&gt;
===Syscall Page ===&lt;br /&gt;
A syscall page is a set of memory pages shared between user space and kernel space, through which exception-less system calls are issued. Each page is a table of syscall entries, each holding the requested call, its arguments, a status field, and eventually the return value (see the Contribution section below).[1]&lt;br /&gt;
&lt;br /&gt;
===Syscall Threads ===&lt;br /&gt;
Syscall threads are kernel-only threads whose sole purpose is to pull submitted requests from syscall pages and execute them, decoupling system call execution from system call invocation (see the Contribution section below).[1]&lt;br /&gt;
&lt;br /&gt;
===Inter-Processor Interrupt ===&lt;br /&gt;
An inter-processor interrupt (IPI) is an interrupt sent by one processor core to another, for example to ask it to reschedule or to migrate a thread. IPIs are expensive, which is one reason FlexSC avoids relying on them for offloading system calls.[1]&lt;br /&gt;
&lt;br /&gt;
===Latency ===&lt;br /&gt;
Latency is a measure of the time delay between the start of an action and its completion in a system.[20]&lt;br /&gt;
&lt;br /&gt;
===Producer-Consumer Problem ===&lt;br /&gt;
The producer-consumer problem describes two parties sharing a bounded buffer: producers add items to the buffer while consumers remove them, and the two must be synchronized so that producers do not overrun a full buffer and consumers do not read from an empty one. The relationship to FlexSC is that user-mode threads act as producers, submitting entries to syscall pages, while the kernel&#039;s syscall threads act as consumers, executing those entries.[1]&lt;br /&gt;
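The classic producer-consumer pattern can be sketched as follows (a generic illustration, not FlexSC&#039;s code): producers put items into a bounded shared buffer and consumers take them out, with the queue providing the required synchronization:&lt;br /&gt;

```python
import queue
import threading

buf = queue.Queue(maxsize=4)   # bounded shared buffer
results = []

def producer(n):
    for i in range(n):
        buf.put(i)             # blocks if the buffer is full
    buf.put(None)              # sentinel: no more items

def consumer():
    while True:
        item = buf.get()       # blocks if the buffer is empty
        if item is None:
            break
        results.append(item * item)

t1 = threading.Thread(target=producer, args=(8,))
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In FlexSC the analogous buffer is the syscall page, with user-mode threads in the producer role and syscall threads in the consumer role.&lt;br /&gt;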
&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
System calls provide an interface for user-mode applications to request services from the operating system. Traditionally, the system call interface has been implemented using synchronous system calls, which block the calling user-space process when the system call is initiated. The benefit of synchronous system calls comes from the easy-to-program nature of sequential operation. However, this ease of use comes with undesirable side effects that can reduce the instructions per cycle (IPC) achieved by the processor.[9] In &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, Soares and Stumm present a new form of system call which minimizes the negative effects of synchronous system calls while remaining easy for application programmers to adopt.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The negative effects of synchronous system calls have been researched heavily; it is accepted that, although easy to use, they are not optimal. Previous research includes work on &amp;lt;b&amp;gt;system call batching&amp;lt;/b&amp;gt; such as multi-calls[6], &amp;lt;b&amp;gt;locality of execution on multicore systems&amp;lt;/b&amp;gt;[7][8], and &amp;lt;b&amp;gt;non-blocking execution&amp;lt;/b&amp;gt;. System call batching shares great similarity with FlexSC, as multiple system calls are grouped together to reduce the number of mode switches required of the system.[6] The difference is that multi-calls make no use of parallel execution of system calls, nor do they manage the blocking aspect of synchronous system calls; FlexSC handles both of these situations, as described in the &amp;lt;b&amp;gt;Contribution&amp;lt;/b&amp;gt; section of this document.[1] Previous research into locality of execution on multicore systems has focused on managing device interrupts and limiting the processor interference associated with interrupt handling.[7][8] However, these solutions require a microkernel and, although they can dedicate certain execution to specific cores of a system, they cannot dynamically adapt the proportion of cores used by the kernel or shared between kernel and user execution as FlexSC can.[1] Non-blocking execution research has focused on threading, event-based (non-blocking), and hybrid solutions; FlexSC differs by providing a mechanism that separates system call execution from system call invocation.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
===Exception-Less System Calls===&lt;br /&gt;
Exception-less system calls are the research team&#039;s alternative to synchronous system calls. The downsides of synchronous system calls include the cumulative mode switch time of multiple independently issued system calls, state pollution of key processor structures (TLB, caches, etc.)[1][3], and, potentially most crucial, the performance impact on the user-mode application during a system call. Exception-less system calls attempt to resolve these three issues through:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
1. &amp;lt;u&amp;gt;System Call Batching:&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Instead of having each system call run as soon as it is issued, FlexSC groups system calls together into batches. These batches can then be executed at one time, thus minimizing the frequency of mode switches between user and kernel modes. Batching helps with both the direct cost of mode switching and the indirect cost, namely the pollution of critical processor structures associated with switching modes. System call batching works by first collecting as many system call requests as possible, then switching to kernel mode and executing each of them.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2. &amp;lt;u&amp;gt;Core Specialization&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
On a multi-core system, FlexSC can provide the ability to designate a single core to run all system calls. The reason this is possible is that for an exception-less system call, the system call execution is decoupled from the system call invocation. This is described further in &amp;lt;b&amp;gt;Decoupling Execution from Invocation&amp;lt;/b&amp;gt; section below.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
3. &amp;lt;u&amp;gt;Exception-less System Call Interface&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To provide an asynchronous interface to the kernel, FlexSC uses &amp;lt;b&amp;gt;syscall pages&amp;lt;/b&amp;gt;: a set of memory pages shared between user mode and kernel mode. User-space threads interact with syscall pages in order to request kernel-mode procedures (system calls). A user-mode thread writes a system call request into a free entry of a syscall page; the kernel executes the call once the batch condition is met and stores the return value back in the entry, where the user-mode thread can later retrieve it. Neither issuing the system call via the syscall page nor reading the return value from it generates a processor exception. Each syscall page is a table of syscall entries, and each entry may be in one of three states: &amp;lt;b&amp;gt;Free&amp;lt;/b&amp;gt;, meaning a syscall request can be written into the entry; &amp;lt;b&amp;gt;Submitted&amp;lt;/b&amp;gt;, meaning the kernel can proceed to invoke the appropriate system call; and &amp;lt;b&amp;gt;Done&amp;lt;/b&amp;gt;, meaning the kernel has finished and the return value is ready for the user-mode thread to retrieve.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
4. &amp;lt;u&amp;gt;Decoupling Execution from Invocation&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In order to separate a system call invocation from the execution of the system call, &amp;lt;b&amp;gt;syscall threads&amp;lt;/b&amp;gt; were created. The sole purpose of syscall threads is to pull requests from syscall pages and execute the request, always in kernel mode. This is the mechanic that allows exception-less system calls to provide the ability for a user-mode thread to issue a request and continue to run while the kernel level system call is being executed. In addition, since the system call invocation is separate from execution, a process running on one core may request a system call yet the execution of the system call may be completed on an entirely different core. This allows exception-less system calls the unique capability of having all system call execution delegated to a specific core while other cores maintain user-mode execution.[1]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
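The interface described in points 3 and 4 can be sketched as a user-space simulation. All names here are our own invention; a real syscall page is shared memory written with regular store instructions, not a Python list. Entries move from free to submitted to done, and a separate &#039;syscall thread&#039; consumes submitted entries while the caller keeps running:&lt;br /&gt;

```python
import threading
import time

FREE, SUBMITTED, DONE = "free", "submitted", "done"

# A toy "syscall page": a table of entries, each holding a state,
# a request, and a return value. Names are illustrative only.
page = [{"state": FREE, "call": None, "ret": None} for _ in range(8)]
stop = False

def submit(call, arg):
    """User side: claim a free entry and mark it submitted (no exception raised)."""
    for entry in page:
        if entry["state"] == FREE:
            entry["call"], entry["ret"] = (call, arg), None
            entry["state"] = SUBMITTED
            return entry
    raise RuntimeError("no free entry")

def syscall_thread():
    """Kernel side: pull submitted entries and execute them."""
    while not stop:
        for entry in page:
            if entry["state"] == SUBMITTED:
                name, arg = entry["call"]
                entry["ret"] = f"{name}({arg}) ok"  # stand-in for real work
                entry["state"] = DONE
        time.sleep(0.001)

worker = threading.Thread(target=syscall_thread, daemon=True)
worker.start()

e = submit("write", "buf")   # invocation returns immediately
while e["state"] != DONE:    # the caller may do other work, then poll
    time.sleep(0.001)
stop = True
print(e["ret"])  # write(buf) ok
```

Because submission and execution touch only shared memory, the consumer loop could just as well run on a different core, which is what enables FlexSC&#039;s core specialization.&lt;br /&gt;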
&lt;br /&gt;
===FlexSC Threads===&lt;br /&gt;
As mentioned above, FlexSC Threads are a key component of the exception-less system call interface. FlexSC Threads transform regular, synchronous system calls into exception-less system calls and are compatible with both POSIX Threads and the default Linux thread library, which means that multi-threaded Linux applications can run on FlexSC Threads with no modifications. The intended use is with server-type applications containing many user-mode threads. To accommodate multiple user-mode threads, the FlexSC interface provides a syscall page for each core of the system; multiple user-mode threads are multiplexed onto a single syscall page, which in turn has a single kernel-level thread to execute the system calls. Programming directly against exception-less system calls can be compared to event-driven programming, since interactions are not guaranteed to be sequential; this increases the complexity of programming compared with the relatively simple synchronous system call interface.[1][2][3]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Critique: ==&lt;br /&gt;
&lt;br /&gt;
===Moore&#039;s Law===&lt;br /&gt;
One interesting aspect of this paper is how the research relates to Moore&#039;s Law, which states that the number of transistors on a chip doubles roughly every 18 months.[10] This has led to very large increases in the performance potential of software, but it has also opened a large gap between the actual performance of efficient and inefficient software. The paper claims that this gap is mainly caused by the disparity in cost of accessing different processor resources such as registers, caches, and memory.[1] In this light, the FlexSC interface is not just an attempt to increase the efficiency of current system calls; it is an attempt to change the way we view software. It is not enough to keep building more powerful machines if the code we run does not become more efficient along with the gain in power. Instead we need to focus on appropriate allocation and usage of that power, as failure to do so is the origin of the gap between our potential and our performance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Performance of FlexSC===&lt;br /&gt;
It is of particular interest that exception-less system calls only outperformed synchronous system calls when the system was running multiple system calls. For an individual system call, the overhead of the FlexSC interface was greater than that of a synchronous call. The real benefit of FlexSC comes when there are many system calls that can be batched before execution; in that situation FlexSC far outperformed traditional synchronous system calls.[1] This is why the paper focuses on server-like applications, as servers must handle many user requests efficiently to be useful. For the general case, then, it appears that a hybrid solution, using synchronous calls below some threshold and exception-less system calls above it, would be most efficient.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Blocking Calls===&lt;br /&gt;
FlexSC relies on the fact that web and database servers have a lot of concurrency and independent parallelism: FlexSC can &#039;harvest&#039; enough independent work that it does not need to track dependencies between system calls. However, this could be a problem in other situations. Since FlexSC system calls are &#039;inherently asynchronous&#039;, if one needs to block, FlexSC will jump to the next system call and execute that one instead. This can cause a problem for dependent system calls, such as a write call that has an outstanding dependency on a preceding read call. This could be resolved with some kind of combined system call, that is, multiple system calls executed as one single call; unfortunately, FlexSC does not currently support such a mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Core Scheduling Issues===&lt;br /&gt;
In a system with X cores, FlexSC needs to dedicate some subset of the cores to system calls. Currently, FlexSC first wakes up core X to run a system call thread; when another batch comes in and core X is still busy, it tries core X-1, and so on. Of all the algorithms the authors tested, this simplest one turned out to be the most efficient for FlexSC scheduling. However, it was only tested with FlexSC running a single application at a time; the scheduling algorithm would need to be fine-tuned for running multiple applications.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
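The search order described above can be sketched as a simple loop (an illustrative model with invented names, not FlexSC&#039;s actual scheduler): start from the highest-numbered core and walk down until an idle one is found:&lt;br /&gt;

```python
# Toy model of the core-selection policy described above: try core X first,
# then X-1, and so on, until an idle core is found. Names are illustrative.
def pick_syscall_core(busy, num_cores):
    """Return the highest-numbered idle core, or None if all are busy."""
    for core in range(num_cores - 1, -1, -1):
        if not busy[core]:
            return core
    return None

busy = {0: False, 1: False, 2: True, 3: True}  # cores 3 and 2 are busy
print(pick_syscall_core(busy, 4))  # 1
```

The appeal of this policy is that it needs no global state beyond a per-core busy flag, which may explain why it beat the more elaborate alternatives the authors tried.&lt;br /&gt;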
&lt;br /&gt;
===When There Are Not More Threads Than Cores===&lt;br /&gt;
In situations where a single thread uses 100% of a CPU and runs primarily in user space, as in &#039;scientific programs&#039;, FlexSC causes more overhead than performance gain. As a result, FlexSC is not an optimal implementation for such cases.&lt;br /&gt;
&lt;br /&gt;
===IO === &lt;br /&gt;
FlexSC is not suited for data-intensive, IO-centric applications, as observed by Vijay Vasudevan [16], whose research aims to reduce the energy footprint of data centers. FlexSC was considered there; it was found that FlexSC&#039;s reduction of mode switches, via the use of memory pages shared between user space and kernel space, is useful for reducing the impact of system calls. That technique, however, was not useful for IO-intensive work, since it removed neither the requirement of data copying nor the overheads associated with interrupts in IO-intensive tasks.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Some Kernel Changes Are Required===&lt;br /&gt;
Though most of the work is done transparently (there is no need to modify application code), a small kernel change (3 lines of code) is still required, as per section 3.2 of the paper [1].&amp;lt;br&amp;gt;&lt;br /&gt;
This means adopters would have to add or modify the referenced lines and recompile the kernel, and would have to repeat this after each kernel update.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Multicore Systems ===&lt;br /&gt;
For a multicore system, the FlexSC scheduler will attempt to choose a subset of the available cores and specialize them for running system call threads. It is unclear how this dynamic allocation is done: the paper mentions that decisions are made based on workload requirements, which does not clarify the mechanism. Further, the paper mentions that a predefined, static list of cores is used for system call thread assignments, but it is unclear when that list is created. Is it created at installation time, is it generated initially, or does the installer have to do manual work? On a related note, scalability with an increased number of cores is ambiguous. One gets the impression that the scheduler is very scalable, since each core spawns a system call thread; thus as many threads as there are cores could be running concurrently, for one or more processes [1]. More explicit results, however, would have been beneficial. Finally, the paper mentions that hyper-threading was turned off to ease the analysis of the results. That is understandable, but it would be nice to know whether these hardware threads (2 per core) would be treated as cores when turned on. Would the scheduler then realize that it can use eight cores? Would the predefined static core list need to be modified to list eight cores instead of four?&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
Along the same reasoning, and given the growing popularity of GPU&#039;s use for general programming, it would&#039;ve been useful to at-least hypothesize on the possible performance&amp;lt;br&amp;gt;&lt;br /&gt;
outcome when using specialized GPUs, like NVIDIA&#039;s Tesla GPUs for example. Would FlexSC&#039;s scheduler be able to take advantage of the additional cores, and hence use them for&amp;lt;br&amp;gt;&lt;br /&gt;
specialized purposes ?&lt;br /&gt;
&lt;br /&gt;
== Related Work: ==&lt;br /&gt;
&lt;br /&gt;
===System Call Batching===&lt;br /&gt;
&lt;br /&gt;
Multi-calls are a concept in which multiple system calls are collected and submitted as a single system call; they are used both in operating systems and in paravirtualized hypervisors. The Cassyopia compiler has a special technique named a looped multi-call, in which the result of one system call can be fed as an argument to another system call in the same multi-call [11]. There is a significant difference between multi-calls and exception-less system calls: multi-calls neither exploit parallel execution of system calls nor address blocking system calls the way exception-less system calls do. Multi-call system calls are executed sequentially; each one must complete before the next may start. Exception-less system calls, on the other hand, can be executed in parallel, and when one call blocks, the next can execute immediately.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
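The looped multi-call idea described above can be illustrated with a small sketch (purely illustrative: the function names and the PREV placeholder are our invention, not Cassyopia&#039;s actual interface):

```python
PREV = object()  # placeholder meaning "use the previous call's result here"

def multicall(requests):
    """Run (func, args) pairs sequentially as one batch, as a multi-call
    would inside a single kernel crossing; PREV in an argument list is
    replaced by the preceding call's return value (the 'looped' part)."""
    results = []
    prev = None
    for func, args in requests:
        args = [prev if a is PREV else a for a in args]
        prev = func(*args)
        results.append(prev)
    return results

# Toy stand-ins for system calls: "open" yields a file descriptor,
# which feeds the following "read" within the same batch.
fd_table = {7: b"hello"}
out = multicall([
    (lambda path: 7, ["/tmp/f"]),        # pretend open() returns fd 7
    (lambda fd: fd_table[fd], [PREV]),   # pretend read(fd) consumes that fd
])
assert out == [7, b"hello"]
```

Note how each request runs strictly after the previous one completes, which is exactly the sequential behavior that distinguishes multi-calls from exception-less system calls.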
&lt;br /&gt;
===Locality of Execution and Multicores===&lt;br /&gt;
&lt;br /&gt;
Several techniques have addressed the issue of locality of execution. Larus and Parkes proposed Cohort Scheduling to efficiently execute staged computations [12]. Other techniques include Soft Timers [13] and Lazy Receiver Processing [14], which tackle locality of execution in the handling of device interrupts; both try to limit the processor interference associated with interrupt handling without affecting the latency of servicing requests. Another technique, Computation Spreading [15], is the most similar to the multicore execution of FlexSC: it proposes processor modifications that allow hardware migration of threads to specialized cores. However, its authors did not model TLBs, and on current hardware synchronous thread migration requires a costly inter-processor interrupt. Other proposals dedicate cores to specific operating system functionality, but they differ from FlexSC in two ways: they require a micro-kernel, and, unlike them, FlexSC can dynamically adapt the proportion of cores used exclusively by the kernel versus cores shared by user and kernel execution. While these solutions rely on expensive inter-processor interrupts to offload system calls, FlexSC provides a more efficient and flexible mechanism.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-blocking Execution===&lt;br /&gt;
&lt;br /&gt;
Past research on improving system call performance has focused extensively on blocking versus non-blocking behavior. Typically, researchers have used threaded, event-based (non-blocking), and hybrid systems to obtain high performance in server applications. The main difference between these proposals for non-blocking execution and FlexSC is that none of them decouple a system call&#039;s invocation from its execution.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References: ==&lt;br /&gt;
[1] Soares, Livio and Michael Stumm, &amp;lt;i&amp;gt;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;lt;/i&amp;gt;, University of Toronto, 2010.[http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Tanenbaum, Andrew S., &amp;lt;i&amp;gt;Modern Operating Systems: 3rd Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2008.&lt;br /&gt;
&lt;br /&gt;
[3] Stallings, William, &amp;lt;i&amp;gt;Operating Systems: Internals and Design Principles - 6th Edition&amp;lt;/i&amp;gt;, Pearson/Prentice Hall, New Jersey, 2009.&lt;br /&gt;
&lt;br /&gt;
[4] Garfinkel, Tal, &amp;lt;i&amp;gt;Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools&amp;lt;/i&amp;gt;, Computer Science Department - Stanford University.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.2695&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[5] Yoo, Sunjoo &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Automatic Generation of Fast Timed Simulation Models for Operating Systems in SoC Design&amp;lt;/i&amp;gt;, SLS Group, TIMA Laboratory, Grenoble, 2002.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1148&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[6] Rajagopalan, Mohan &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Cassyopia: Compiler Assisted System Optimization&amp;lt;/i&amp;gt;, Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, Lihue, Hawaii, 2003.[https://www.usenix.org/events/hotos03/tech/full_papers/rajagopalan/rajagopalan.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[7] Kumar, Sanjeev and Christopher Wilkerson, &amp;lt;i&amp;gt;Exploiting Spatial Locality in Data Caches using Spatial Footprints&amp;lt;/i&amp;gt;, Princeton University and Microcomputer Research Labs (Oregon), 1998.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.1550&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[8] Jin, Shudong and Azer Bestavros, &amp;lt;i&amp;gt;Sources and Characteristics of Web Temporal Locality&amp;lt;/i&amp;gt;, Computer Science Department - Boston University, Boston. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.5941&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[9] Agarwal, Vikas &amp;lt;i&amp;gt;et al.&amp;lt;/i&amp;gt;, &amp;lt;i&amp;gt;Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures&amp;lt;/i&amp;gt;, University of Texas, Austin, 2000.[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.3694&amp;amp;rep=rep1&amp;amp;type=pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[10] Tuomi, Ilkka, &amp;lt;i&amp;gt;The Lives and Death of Moore&#039;s Law&amp;lt;/i&amp;gt;, 2002.[http://131.193.153.231/www/issues/issue7_11/tuomi/ HTML]&lt;br /&gt;
&lt;br /&gt;
[11] BARHAM, P., DRAGOVIC, B., FRASER, K., HAND, S., HARRIS, T., HO, A., NEUGEBAUER, R., PRATT, I., AND WARFIELD, A. Xen and the art of virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP) (2003), pp. 164–177.&lt;br /&gt;
&lt;br /&gt;
[12] LARUS, J., AND PARKES, M. Using Cohort-Scheduling to Enhance Server Performance. In Proceedings of the annual conference on USENIX Annual Technical Conference (ATEC) (2002), pp. 103–114.&lt;br /&gt;
&lt;br /&gt;
[13] ARON, M., AND DRUSCHEL, P. Soft timers: efficient microsecond software timer support for network processing. ACM Trans. Comput. Syst. (TOCS) 18, 3 (2000), 197–228.&lt;br /&gt;
&lt;br /&gt;
[14] DRUSCHEL, P., AND BANGA, G. Lazy receiver processing (LRP): a network subsystem architecture for server systems. In Proceedings of the 2nd USENIX Symposium on Operating Systems Design and Implementation (OSDI) (1996), pp. 261–275.&lt;br /&gt;
&lt;br /&gt;
[15] CHAKRABORTY, K., WELLS, P. M., AND SOHI, G. S. Computation Spreading: Employing Hardware Migration to Specialize CMP Cores On-the-fly. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2006), pp. 283–292.&lt;br /&gt;
&lt;br /&gt;
[16] Vasudevan, Vijay. &amp;lt;i&amp;gt;Improving Datacenter Energy Efficiency Using a Fast Array of Wimpy Nodes&amp;lt;/i&amp;gt;, Thesis Proposal, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, October 12, 2010.[http://www.cs.cmu.edu/~vrv/proposal/vijay_thesis_proposal.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[17] Patricia J. Teller, &amp;lt;i&amp;gt;Translation-Lookaside Buffer Consistency&amp;lt;/i&amp;gt;, IEEE Computer, Volume 23, Issue 6, IBM T. J. Watson Research Center, Yorktown Heights, NY, June 1990. [http://dx.doi.org/10.1109/2.55498 HTML]&lt;br /&gt;
&lt;br /&gt;
[18] Linux ABI sourceforge page. [http://linux-abi.sourceforge.net/ HTML] and Linux application page. [http://www.linux.org/apps/AppId_8088.html HTML]&lt;br /&gt;
&lt;br /&gt;
[19] DREPPER, U., AND MOLNAR , I. &amp;lt;i&amp;gt;The Native POSIX Thread Library for Linux&amp;lt;/i&amp;gt;. Tech. rep., RedHat Inc, 2003. [http://people.redhat.com/drepper/nptl-design.pdf HTML]&lt;br /&gt;
&lt;br /&gt;
[20] M. Brian Blake, &amp;lt;i&amp;gt;Coordinating Multiple Agents for Workflow-Oriented Process Orchestration&amp;lt;/i&amp;gt;. Information Systems and e-Business Management Journal, Springer-Verlag, December 2003. [http://www.cs.georgetown.edu/~blakeb/pubs/blake_ISEB2003.pdf PDF]&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6179</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6179"/>
		<updated>2010-12-02T04:25:38Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now very thorough :) Since you pretty much turned all of your bulleted points from the discussion page into that on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thnx for the great job! I basically finished the part of related work based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, And great additions. &lt;br /&gt;
In terms of what needs to be done, I do believe that adding some detail to the critique is where we really need some focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration; maybe just take a look at the topics presented, see if additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can be agreeing or disagreeing with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk, and took some quick notes. There seem to be 3 good ones (of the 9) that I picked out. I&#039;ll summarize them and add to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to have a listen to a specific question, and maybe try to do some more &#039;critiquing&#039;, here is a list of what time each question takes place, a very general statement on what the question is about, and the very general answer:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;1 - 22:30 &amp;lt;br&amp;gt;Q: Did the paper consider Upstream patches(?) &amp;lt;br&amp;gt;A:No&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;2 - 23:00 &amp;lt;br&amp;gt;Q: Security issues with the pages &amp;lt;br&amp;gt;A:Pages pre-processor, no issue&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;3 - 24:10 &amp;lt;br&amp;gt;Q: What about blocking calls (read/write)? &amp;lt;br&amp;gt;A: Not handled&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;4 - 25:50 &amp;lt;br&amp;gt;Q: ? &amp;lt;br&amp;gt;A: Not a problem? (Personally didn&#039;t understand question, don&#039;t believe it&#039;s important, but anyone whose willing should double check)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;5 - 28:00 &amp;lt;br&amp;gt;Q: Compare pollution between user thread switching to user-kernel thread switching? &amp;lt;br&amp;gt;A: No, only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;6 - 29:30 &amp;lt;br&amp;gt;Q: Scheduling problems of what cores are &#039;system&#039; core, and what cores are &#039;user&#039; cores &amp;lt;br&amp;gt;A: Very simple algorithm, but not tested when running multiple apps, would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;7 - 31:00 &amp;lt;br&amp;gt;Q: Situations where FlexSC is bad, when running less or equal threads to the number of cores, such as &amp;quot;Scientific programs&amp;quot;, mostly in userspace where one thread has 100% CPU resource &amp;lt;br&amp;gt;A: Agrees, FlexSC is not to be used for such situations&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;8 - 33:00 &amp;lt;br&amp;gt;Q: Problems with un-related threads demanding service, how does it scale? Issue with frequency of polling could cause sys calls to take time to perform &amp;lt;br&amp;gt;A: (Would be answered offline)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;9 - 34:30 &amp;lt;br&amp;gt;Q: Backwards compatibility and robustness &amp;lt;br&amp;gt;A: Only an issue with getTID (Thread ID), needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Wrote information in Critique for questions 3, 6 and 7 (Blocking Calls, Core Scheduling Issues, and When There Are Not More Threads Then Cores). If you feel any additions need to be made, please feel free to add them. Most importantly, I&#039;m not sure how to cite these. All information as obtained from the mp3 of the presentation, could some one let me know how to go about citing this?&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 21:05, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m going to run through the whole paper, and just make sure everything makes sense, and fill in the holes where needed. I&#039;ll also add my own thoughts along the way. Feel free to do the same.-Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted way to request services from the OS kernel, historical implementation.&lt;br /&gt;
   - System calls almost always synchronous &lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
        o TLB is the translation lookaside buffer, which caches page translations (for data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
        o Improve processor efficiency via enabling flexible scheduling of OS work which in turn reduces size of execution both in kernel and user space thus reducing pollution effects on processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
        o Flex-SC-Threads convert legacy system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases to performance potential of software while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by disparity of accessing different processor resources (registers, caches, memory)&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call, like pwrite, is called, the instruction per cycle level drops significantly and it takes many (in the example 14,000) cycles of execution for the instruction per cycle rate&lt;br /&gt;
 to return to the level it was at before the system (pwrite) call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that now a multi-core system can have individual cores designated to run either user-mode or kernel mode execution dynamically depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
        o This would allow the threading package to harvest independent system calls, by switching threads, in user-mode, whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - Traditional way to measure the performance cost of system calls is the mode switch time. This is the time necessary to execute the system call instruction in user-mode, resume execution in kernel mode and&lt;br /&gt;
 then return execution back to the user-mode.&lt;br /&gt;
   - Mode switch in modern processors is a processor exception.&lt;br /&gt;
        o Flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
        o State pollution is the measure of how much user-mode data is overwritten in places like the TLB, cache (L1, L2, L3), and branch prediction tables with kernel level execution instructions for the system call. &lt;br /&gt;
        o This data must be re-populated upon the return to user-mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
        o Synchronous system calls do cause a drop in user-mode IPC  due to; direct overhead -  the processor exception associated with the system call which flushes the processor pipeline; and indirect overhead&lt;br /&gt;
 – system call pollution on processors structures.&lt;br /&gt;
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
        o A system call can be scheduled on a different core than the core on which it was invoked, but only for exception-less system calls.&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
        o User-space threads find a free entry in a syscall page and place a request for a system call. The user-space thread can then continue executing without interruption and must then return to the syscall&lt;br /&gt;
 page to get the return value from the system call.&lt;br /&gt;
        o Neither issuing the system call (via the syscall page) nor getting the return value generate an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
        o Each syscall entry has a state:&lt;br /&gt;
                 Free – means a syscall can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts a special kernel thread, syscall thread, is used.&lt;br /&gt;
        o Sole purpose is to pull requests from syscall pages and execute them always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
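The syscall-page life cycle sketched above (free, submitted, done, with syscall threads pulling requests) can be modeled as a toy state machine. This is purely illustrative: the entry fields and function names are our assumptions, not the paper&#039;s actual kernel code.

```python
# Toy model of a FlexSC-style syscall page: user code writes entries
# without raising a processor exception; a kernel "syscall thread"
# later executes submitted entries and stores the return values.
FREE, SUBMITTED, DONE = "free", "submitted", "done"

class Entry:
    def __init__(self):
        self.state, self.call, self.args, self.ret = FREE, None, None, None

def submit(page, call, args):
    """User side: claim a free entry and mark it submitted (no trap)."""
    for e in page:
        if e.state == FREE:
            e.call, e.args, e.state = call, args, SUBMITTED
            return e
    raise RuntimeError("syscall page full")

def syscall_thread(page):
    """Kernel side: execute submitted entries, store results, mark done."""
    for e in page:
        if e.state == SUBMITTED:
            e.ret = e.call(*e.args)
            e.state = DONE

page = [Entry() for _ in range(4)]
e = submit(page, lambda x: x + 1, (41,))   # stand-in for a real syscall
syscall_thread(page)
assert e.state == DONE and e.ret == 42     # user thread reads result later
```

In the real system the user thread would poll or be scheduled back to read the entry once it reaches the done state; neither submission nor result retrieval generates an exception.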
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
        o The researchers describe working with exception-less system calls as being similar to event-driven programming in that you do not get the same sequential execution of code as you do with synchronous&lt;br /&gt;
 system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - Intended use is with server-type applications which have many user-mode threads (like Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel level thread via interactions with the syscall pages.&lt;br /&gt;
        o The syscall pages are private to each kernel level thread, this means each core of a system has a syscall page from which it will receive system calls.&lt;br /&gt;
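The FlexSC-Threads behavior described above (switch to another user-mode thread whenever one invokes a system call, then batch-execute the harvested calls) can be mimicked loosely with generators. This is an analogy only; the names and the fake result value are our invention, not the real library.

```python
# Loose analogy of M-on-N harvesting: each "user thread" is a generator
# that yields a (call, args) request instead of trapping; the scheduler
# runs the others while requests accumulate, then executes the batch.
def user_thread(name):
    result = yield ("getpid", ())        # "system call" without a trap
    return (name, result)

def run(threads):
    pending, finished = [], []
    for t in threads:
        req = next(t)                    # run until the first syscall request
        pending.append((t, req))
    for t, (call, args) in pending:      # batch: deliver results for all
        try:
            t.send(1234)                 # fake result for the harvested call
        except StopIteration as s:       # user thread ran to completion
            finished.append(s.value)
    return finished

out = run([user_thread("a"), user_thread("b")])
assert out == [("a", 1234), ("b", 1234)]
```

The point of the analogy is that both user threads make progress before either "system call" is serviced, which is what lets FlexSC-Threads harvest independent calls from unmodified multi-threaded programs.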
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls compared to a bunch of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same is true for a remote server situation, one synchronous call is much faster than one exception-less system call but a batch of exception-less system calls is faster than the same number&lt;br /&gt;
 of synchronous system calls.&lt;br /&gt;
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Operating systems have a concept called multi-calls which involves collecting multiple system calls and submitting them as a single system call.&lt;br /&gt;
        o The Cassyopia compiler has an additional process called a looped multi-call where the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially, each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
        o Other techniques include Soft Timers and Lazy Receiver Processing which try to tackle the issue of locality of execution by handling device interrupts. They both try to&lt;br /&gt;
 limit processor interference associated with interrupt handling without affecting the latency of servicing requests.&lt;br /&gt;
        o Computation Spreading is another locality process which is similar to FlexSC.&lt;br /&gt;
                 Processor modifications that allow hardware migration of threads and migration to specialized cores.&lt;br /&gt;
                 Did not model TLBs and on current hardware synchronous thread migration is a costly interprocessor interrupt.&lt;br /&gt;
        o Also have proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                 Also FlexSC can dynamically adapt the proportion of cores used by the kernel or cores shared by user and kernel execution dynamically.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o Main difference between past research and FlexSC is that none of the past proposals have decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6177</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=6177"/>
		<updated>2010-12-02T04:24:33Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Who is working on what ? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Corey Faibish: [mailto:corey.faibish@gmail.com corey.faibish@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Tawfic Abdul-Fatah: [mailto:tfatah@gmail.com tfatah@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Fangchen Sun: [mailto:sfangche@connect.carleton.ca sfangche@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Mike Preston: [mailto:michaelapreston@gmail.com michaelapreston@gmail.com]&lt;br /&gt;
&lt;br /&gt;
Wesley L. Lawrence: [mailto:wlawrenc@connect.carleton.ca wlawrenc@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
Can&#039;t access the video without a login as we found out in class, but you can listen to the speech and follow with the slides pretty easily, I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;br /&gt;
&lt;br /&gt;
==Who is working on what ?==&lt;br /&gt;
Just to keep track of who&#039;s doing what --[[User:Tafatah|Tafatah]] 01:37, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hey everyone, I have taken the liberty of trying to provide a good first start on our paper. I have provided many resources and filled in information for all of the sections. This is not complete, but it should make the rest of the work a lot easier. Please go through and add in pieces that I am missing (specifically in the Critique section) and then we can put this essay to bed. Also, please note that below I have included my notes on the paper so that if anyone feels they do not have time to read the paper, they can read my notes instead and still find additional materials to contribute with.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 18:22, 20 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Man, Mike: you did a nice job! I&#039;m reading through it now very thoroughly :) Since you pretty much turned all of your bulleted points from the discussion page into that on the main page, what else needs to be done? Just expanding on each topic and sub-topic? Or are there untouched concepts/topics that we should be addressing?&lt;br /&gt;
Oh and question two: Should we turn the Q&amp;amp;A from the end of the video of the presentation into information for the &#039;&#039;Critique&#039;&#039; section?&lt;br /&gt;
--[[User:CFaibish|CFaibish]] 20:34, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Mike, thanks for the great job! I basically finished the Related Work part based on your draft.&lt;br /&gt;
--[[User:sfangchen|Fangchen Sun]] 17:40, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
No problem, and great additions. &lt;br /&gt;
In terms of what needs to be done, I believe that adding some detail to the critique is where we really need some focus. Using the Q&amp;amp;A from the video is probably a great source of inspiration; maybe just take a look at the topics presented, see if additional material from other sources can be obtained, and use those sources to address any pros or cons of this article. Remember, the critique section can agree or disagree with what is presented in the actual paper.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 15:12, 28 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I noticed we needed some work in the Critique section, so I listened to the Q&amp;amp;A session at the end of the FlexSC mp3 talk and took some quick notes. There seem to be three good ones (of the nine) that I picked out. I&#039;ll summarize them and add them to the Critique section, specifically questions 3, 6, and 7. If anyone else wants to have a listen to a specific question, and maybe try to do some more &#039;critiquing&#039;, here is a list of when each question takes place, a very general statement of what the question is about, and the very general answer:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;1 - 22:30 &amp;lt;br&amp;gt;Q: Did the paper consider Upstream patches(?) &amp;lt;br&amp;gt;A:No&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;2 - 23:00 &amp;lt;br&amp;gt;Q: Security issues with the pages &amp;lt;br&amp;gt;A:Pages pre-processor, no issue&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;3 - 24:10 &amp;lt;br&amp;gt;Q: What about blocking calls (read/write)? &amp;lt;br&amp;gt;A: Not handled&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;4 - 25:50 &amp;lt;br&amp;gt;Q: ? &amp;lt;br&amp;gt;A: Not a problem? (Personally didn&#039;t understand the question, and don&#039;t believe it&#039;s important, but anyone who&#039;s willing should double check)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;5 - 28:00 &amp;lt;br&amp;gt;Q: Compare pollution between user thread switching to user-kernel thread switching? &amp;lt;br&amp;gt;A: No, only looked at and measured pollution when switching user-to-kernel.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;6 - 29:30 &amp;lt;br&amp;gt;Q: Scheduling problems of what cores are &#039;system&#039; core, and what cores are &#039;user&#039; cores &amp;lt;br&amp;gt;A: Very simple algorithm, but not tested when running multiple apps, would need to be &amp;quot;fine-tuned&amp;quot;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;7 - 31:00 &amp;lt;br&amp;gt;Q: Situations where FlexSC is bad, when running less or equal threads to the number of cores, such as &amp;quot;Scientific programs&amp;quot;, mostly in userspace where one thread has 100% CPU resource &amp;lt;br&amp;gt;A: Agrees, FlexSC is not to be used for such situations&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;8 - 33:00 &amp;lt;br&amp;gt;Q: Problems with un-related threads demanding service, how does it scale? Issue with frequency of polling could cause sys calls to take time to perform &amp;lt;br&amp;gt;A: (Would be answered offline)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;9 - 34:30 &amp;lt;br&amp;gt;Q: Backwards compatibility and robustness &amp;lt;br&amp;gt;A: Only an issue with getTID (Thread ID), needed a small patch.&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 20:31, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m going to run through the whole paper, and just make sure everything makes sense, and fill in the holes where needed. I&#039;ll also add my own thoughts along the way. Feel free to do the same.-Rarteaga&lt;br /&gt;
&lt;br /&gt;
Wrote information in Critique for questions 3, 6 and 7 (Blocking Calls, Core Scheduling Issues, and When There Are Not More Threads Than Cores). If you feel any additions need to be made, please feel free to add them. Most importantly, I&#039;m not sure how to cite these. All information was obtained from the mp3 of the presentation; could someone let me know how to go about citing this?&lt;br /&gt;
&lt;br /&gt;
--[[User:Wlawrence|Wesley Lawrence]] 21:05, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Paper Summary==&lt;br /&gt;
I am not sure if everyone has taken the time to examine the paper closely, so I thought I would provide my notes on the paper so that anyone who has not read it could have a view of the high points.&lt;br /&gt;
&lt;br /&gt;
Abstract:&lt;br /&gt;
   - System calls are the accepted way to request services from the OS kernel, historical implementation.&lt;br /&gt;
   - System calls are almost always synchronous &lt;br /&gt;
   - Aim to demonstrate how synchronous system calls negatively affect performance due mainly to pipeline flushing and pollution of key processor structures (TLB, data and instruction caches, etc.)&lt;br /&gt;
        o The TLB is the translation lookaside buffer, which caches page translations (for data and code pages) to speed up virtual address translation.&lt;br /&gt;
   - Propose exception-less system calls to improve the current system call process.&lt;br /&gt;
        o Improve processor efficiency by enabling flexible scheduling of OS work, which in turn groups execution in both kernel and user space, thus reducing pollution effects on processor structures.&lt;br /&gt;
   - Exception-less system calls especially effective on multi-core systems running multi-threaded applications.&lt;br /&gt;
   - FlexSC is an implementation of exception-less system calls in the Linux kernel with accompanying user-mode threads from FlexSC-Threads package.&lt;br /&gt;
        o Flex-SC-Threads convert legacy system calls into exception-less system calls.&lt;br /&gt;
Introduction:&lt;br /&gt;
   - Synchronous system calls have a negative impact on system performance due to:&lt;br /&gt;
        o Direct costs – mode switching&lt;br /&gt;
        o Indirect costs – pollution of important processor structures &lt;br /&gt;
   - Traditional system calls:&lt;br /&gt;
        o Involve writing arguments to appropriate registers as well as issuing a special machine instruction which raises a synchronous exception.&lt;br /&gt;
        o A processor exception is used to communicate with the kernel.&lt;br /&gt;
        o Synchronous execution is enforced as the application expects the completion of the system call before user-mode execution resumes.&lt;br /&gt;
   - Moore’s Law has provided large increases to performance potential of software while at the same time widening the gap between the performance of efficient and inefficient software.&lt;br /&gt;
        o This gap is mainly caused by disparity of accessing different processor resources (registers, caches, memory)&lt;br /&gt;
   - Server and system-intensive workloads are known to perform well below processor potential throughput.&lt;br /&gt;
        o These are the items the researchers are mostly interested in.&lt;br /&gt;
        o The cause is often described as due to the lack of locality.&lt;br /&gt;
        o The researchers state this lack of locality is in part a result of the current synchronous system calls.&lt;br /&gt;
   - When a synchronous system call, like pwrite, is called, the instruction per cycle level drops significantly, and it takes many (in the example, 14,000) cycles of execution for the instruction per cycle rate to return to the level it was at before the system (pwrite) call.&lt;br /&gt;
   - Exception-less System Call:&lt;br /&gt;
        o Request for kernel services that does not require the use of synchronous processor exceptions.&lt;br /&gt;
        o System calls are written to a reserved syscall page.&lt;br /&gt;
        o Execution of system calls is performed asynchronously by special kernel level syscall threads. The result of the execution is stored on the syscall page after execution.&lt;br /&gt;
   - By separating system call execution from system call invocation, the system can now have flexible system call scheduling.&lt;br /&gt;
        o This allows system calls to be executed in batches, increasing the temporal locality of execution.&lt;br /&gt;
        o Also provides a way to execute system calls on a separate core, in parallel to user-mode thread execution. This provides spatial per-core locality.&lt;br /&gt;
        o An additional side effect is that now a multi-core system can have individual cores designated to run either user-mode or kernel mode execution dynamically depending on the current system load.&lt;br /&gt;
   - In order to implement the exception-less system calls, the research team suggests adding a new M-on-N threading package.&lt;br /&gt;
        o M user-mode threads executing on N kernel-visible threads.&lt;br /&gt;
        o This would allow the threading package to harvest independent system calls by switching threads in user mode whenever a thread invokes a system call.&lt;br /&gt;
The (Real) Cost of System Calls&lt;br /&gt;
   - The traditional way to measure the performance cost of system calls is the mode switch time. This is the time necessary to execute the system call instruction in user mode, resume execution in kernel mode, and then return execution back to user mode.&lt;br /&gt;
   - Mode switch in modern processors is a processor exception.&lt;br /&gt;
        o Flush the user-mode pipeline, save registers onto the kernel stack, change the protection domain and redirect execution to the proper exception handler.&lt;br /&gt;
   - Another measure of the performance of a system call is the state pollution caused by the system call.&lt;br /&gt;
        o State pollution is the measure of how much user-mode data is overwritten in places like the TLB, caches (L1, L2, L3), and branch prediction tables by kernel-level execution of instructions for the system call. &lt;br /&gt;
        o This data must be re-populated upon the return to user-mode.&lt;br /&gt;
   - Potentially the most significant measure of cost of system calls is the performance impact on a running application.&lt;br /&gt;
        o Ideally, user-mode instructions per cycle should not decrease as a result of a system call.&lt;br /&gt;
        o Synchronous system calls do cause a drop in user-mode IPC due to: direct overhead, the processor exception associated with the system call, which flushes the processor pipeline; and indirect overhead, system call pollution of processor structures.&lt;br /&gt;
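The direct cost described in these notes can be glimpsed from user space with a rough wall-clock probe. This is a hypothetical Python sketch for intuition only, not the methodology of the paper (the authors measure with hardware performance counters):

```python
# Rough user-space probe of the direct cost of synchronous system calls
# (illustrative only; real measurements need hardware counters).
import os
import time

N = 200_000

t0 = time.perf_counter()
for _ in range(N):
    os.getpid()              # each call traps into the kernel
t1 = time.perf_counter()

for _ in range(N):
    pass                     # pure user-mode loop, no mode switches
t2 = time.perf_counter()

syscall_loop = t1 - t0
user_loop = t2 - t1
print(syscall_loop > user_loop)   # the syscall loop should cost more
```

Note that wall-clock timing only captures the direct mode-switch cost; the indirect pollution cost discussed above shows up as reduced IPC in the code that runs after the call, which a simple loop like this does not expose.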
Exception-less System calls:&lt;br /&gt;
   - System call batching&lt;br /&gt;
        o By delaying a series of system calls and executing them in batches you can minimize the frequency of mode switches between user and kernel mode.&lt;br /&gt;
        o Improves both the direct and indirect cost of system calls.&lt;br /&gt;
   - Core specialization&lt;br /&gt;
        o A system call can be scheduled on a different core than the core on which it was invoked, but only for exception-less system calls.&lt;br /&gt;
        o Provides ability to designate a core to run all system calls.&lt;br /&gt;
   - Exception-less Syscall Interface&lt;br /&gt;
        o Set of memory pages shared between user and kernel modes. Referred to as Syscall pages.&lt;br /&gt;
        o User-space threads find a free entry in a syscall page and place a request for a system call. The user-space thread can then continue executing without interruption and must later return to the syscall page to get the return value from the system call.&lt;br /&gt;
        o Neither issuing the system call (via the syscall page) nor getting the return value generate an exception.&lt;br /&gt;
   - Syscall pages&lt;br /&gt;
        o Each page is a table of syscall entries.&lt;br /&gt;
        o Each syscall entry has a state:&lt;br /&gt;
                 Free – means a syscall can be added here&lt;br /&gt;
                 Submitted – means the kernel can proceed to invoke the appropriate system call operations.&lt;br /&gt;
                 Done – means the kernel is finished and has provided the return value to the syscall entry. User space thread must return and get the return value from the page.&lt;br /&gt;
   - Decoupling Execution from Invocation&lt;br /&gt;
        o To separate these two concepts a special kernel thread, syscall thread, is used.&lt;br /&gt;
        o Sole purpose is to pull requests from syscall pages and execute them always in kernel mode.&lt;br /&gt;
        o Syscall threads provide the ability to schedule the system calls on specific cores.&lt;br /&gt;
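The mechanism in this section can be sketched as a toy simulation (hypothetical Python for illustration only; the real FlexSC is implemented in C inside the Linux kernel). Entries move from Free to Submitted to Done, and a syscall thread drains submitted entries with no processor exception on the invocation path:

```python
# Toy model of an exception-less syscall page (illustration only).
FREE, SUBMITTED, DONE = "free", "submitted", "done"

class SyscallPage:
    """Shared table of syscall entries, as described in the notes above."""
    def __init__(self, num_entries=8):
        self.entries = [{"state": FREE} for _ in range(num_entries)]

    def submit(self, name, args):
        # Invocation: the user thread posts a request; no mode switch.
        for i, entry in enumerate(self.entries):
            if entry["state"] == FREE:
                entry.update(state=SUBMITTED, name=name, args=args)
                return i
        raise RuntimeError("no free syscall entries")

    def collect(self, idx):
        # The user thread later reads the result, again with no exception.
        entry = self.entries[idx]
        if entry["state"] != DONE:
            return None  # still pending
        entry["state"] = FREE
        return entry["result"]

def syscall_thread(page, handlers):
    # Execution: a kernel-side worker drains submitted entries in a batch.
    for entry in page.entries:
        if entry["state"] == SUBMITTED:
            entry["result"] = handlers[entry["name"]](*entry["args"])
            entry["state"] = DONE

page = SyscallPage()
idx = page.submit("add", (2, 3))                    # posted, not yet executed
syscall_thread(page, {"add": lambda a, b: a + b})   # executed asynchronously
result = page.collect(idx)
print(result)  # -> 5
```

The point of the sketch is the decoupling: `submit` and `collect` are plain memory operations on the shared page, while all actual execution happens inside `syscall_thread`, which the kernel can schedule wherever and whenever it likes.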
System Calls Galore – FlexSC-Threads&lt;br /&gt;
   - Programming for exception-less system calls requires a different and more complex way of interacting with the kernel for OS functionality.&lt;br /&gt;
        o The researchers describe working with exception-less system calls as being similar to event-driven programming, in that you do not get the same sequential execution of code as you do with synchronous system calls.&lt;br /&gt;
        o In event-driven servers, the researchers suggest using a hybrid of both exception-less system calls (for performance critical paths) and regular synchronous system calls (for less critical system calls).&lt;br /&gt;
FlexSC-Threads&lt;br /&gt;
   - Threading package which transforms synchronous system calls into exception-less system calls.&lt;br /&gt;
   - The intended use is with server-type applications which have many user-mode threads (like Apache or MySQL).&lt;br /&gt;
   - Compatible with both POSIX threads and the default Linux thread library.&lt;br /&gt;
        o As a result, multi-threaded Linux programs are immediately compatible with FlexSC threads without modification.&lt;br /&gt;
   - For multi-core systems, a single kernel level thread is created for each core of the system. Multiple user-mode threads are multiplexed onto each kernel level thread via interactions with the syscall pages.&lt;br /&gt;
        o The syscall pages are private to each kernel level thread; this means each core of the system has a syscall page from which it will receive system calls.&lt;br /&gt;
Overhead:&lt;br /&gt;
   - When running a single exception-less system call against a single synchronous system call, the exception-less call was slower.&lt;br /&gt;
   - When running a batch of exception-less system calls compared to a bunch of synchronous system calls, the exception-less system calls were much faster.&lt;br /&gt;
   - The same is true for a remote server situation: one synchronous call is much faster than one exception-less system call, but a batch of exception-less system calls is faster than the same number of synchronous system calls.&lt;br /&gt;
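These overhead observations can be put into a back-of-envelope model. All cycle counts below are made-up assumptions for illustration, not numbers from the paper; the shape of the result is what matters — batching amortizes the mode-switch and pollution cost over many calls, at the price of a small per-entry posting cost:

```python
# Hypothetical cost model for syscall batching (all numbers assumed).
MODE_SWITCH = 3000       # cycles: direct cost of one exception-based call
POLLUTION = 11000        # cycles: assumed cache/TLB recovery cost
PER_ENTRY = 500          # cycles: assumed cost to post/poll one entry

def sync_cost(n):
    # Every synchronous call pays the full switch and pollution cost.
    return n * (MODE_SWITCH + POLLUTION)

def exless_cost(n, batch=32):
    # One switch-sized disturbance per batch, plus per-entry bookkeeping.
    batches = -(-n // batch)  # ceiling division
    return batches * (MODE_SWITCH + POLLUTION) + n * PER_ENTRY

print(exless_cost(1) > sync_cost(1))    # True: a single call is slower
print(sync_cost(64) > exless_cost(64))  # True: a batch is much faster
```

Under these assumed constants the model reproduces the qualitative finding above: one exception-less call loses to one synchronous call, while a batch wins decisively.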
Related Work:&lt;br /&gt;
   - System Call Batching&lt;br /&gt;
        o Operating systems have a concept called multi-calls which involves collecting multiple system calls and submitting them as a single system call.&lt;br /&gt;
        o The Cassyopia compiler has an additional process called a looped multi-call where the result of one system call can be fed as an argument to another system call in the same multi-call.&lt;br /&gt;
        o Multi-calls do not investigate parallel execution of system calls, nor do they address the blocking of system calls like exception-less system calls do.&lt;br /&gt;
                 Multi-call system calls are executed sequentially, each one must complete before the next may start.&lt;br /&gt;
   - Locality of Execution and Multicores&lt;br /&gt;
        o Other techniques include Soft Timers and Lazy Receiver Processing, which try to tackle the issue of locality of execution when handling device interrupts. They both try to limit processor interference associated with interrupt handling without affecting the latency of servicing requests.&lt;br /&gt;
        o Computation Spreading is another locality process which is similar to FlexSC.&lt;br /&gt;
                 Processor modifications that allow hardware migration of threads and migration to specialized cores.&lt;br /&gt;
                 Did not model TLBs, and on current hardware synchronous thread migration requires a costly interprocessor interrupt.&lt;br /&gt;
        o Also have proposals for dedicating CPU cores to specific operating system functionality.&lt;br /&gt;
                 These solutions require a microkernel system.&lt;br /&gt;
                 Also, FlexSC can dynamically adapt the proportion of cores used by the kernel or cores shared by user and kernel execution.&lt;br /&gt;
   - Non-blocking Execution&lt;br /&gt;
        o Past research on improving system call performance has focused on blocking versus non-blocking behaviour.&lt;br /&gt;
                 Typically researchers used threading, event-based (non-blocking) and hybrid systems to obtain high performance on server applications.&lt;br /&gt;
        o Main difference between past research and FlexSC is that none of the past proposals have decoupled system call execution from system call invocation.&lt;br /&gt;
--[[User:Mike Preston|Mike Preston]] 04:03, 20 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4930</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4930"/>
		<updated>2010-11-11T19:03:29Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details on the specifics of the essay.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between User Space and Kernel Space. User Space is not given direct access to the Kernel&#039;s services for several reasons (one being security); hence, System Calls are the messengers between User and Kernel Space. &lt;br /&gt;
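For instance (a minimal Python illustration; the os module functions here are thin wrappers over the corresponding Linux system calls):

```python
# User Space cannot touch kernel data structures directly; it requests
# services through system calls. Python wraps them in the os module.
import os
import tempfile

pid = os.getpid()                # wraps the getpid(2) system call

# open(2), write(2), close(2): each call traps into the kernel.
fd, path = tempfile.mkstemp()
os.write(fd, b"written via the write(2) system call")
os.close(fd)

with open(path, "rb") as f:      # reads back through read(2)
    data = f.read()
os.unlink(path)                  # unlink(2) removes the temp file
print(pid > 0, data)
```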
&lt;br /&gt;
&lt;br /&gt;
Processor Exceptions are the hardware mechanism used to enter the kernel: the processor flushes the user-mode pipeline, saves registers onto the kernel stack, changes the protection domain, and redirects execution to the appropriate exception handler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Synchronous Execution Model (System Call Interface) refers to the structure in which system calls are managed in a serialized manner. The synchronous model completes one system call at a time, and does not move on to the next system call until the previous one has finished executing. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A Mode Switch refers to moving from one mode to another, specifically from User Space mode to Kernel mode or from Kernel mode back to User Space. It does not matter which direction or which modes we are switching between; this is simply a general term.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Pollution is a more sophisticated way of describing the wasteful or unnecessary delay in the system caused by system calls. This pollution is in direct correlation with the fact that a system call invokes a mode switch, which is not a costless task.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Batching is the concept of batching (i.e. grouping) system calls together: instead of issuing each system call individually in sequence, calls are collected and submitted as a group.&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4929</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4929"/>
		<updated>2010-11-11T19:02:50Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details on the specifics of the essay.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts that are discussed within the paper. Here listed below, are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model(System call Interface) refers to the structure in which system calls specifically are managed in a serialized manner. Moreover, the synchronous model completes one system call at a time, and does not move onto the next system call until the previous system call is finished executing. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mode Switches speaks of moving from one medium to another. Specifically moving from the User Space mode to the Kernel mode or Kernel mode to User Space. It does not matter which direction or which modes we are switching from, this is simply a general term.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Pollution is a more sophisticated manner of speaking of wasteful or un-necessary delay in the system caused by system calls. This pollution is in direct correlation with the fact that the system call invokes a mode switch which is not a costless task.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Batching is the concept  of batching (i.e. grouping) system calls together. This idea is very similar to the counterpart of system calls in a sequential sequence. Groups are formed of system calls after the idea of batching has occurred instead of the initial individual system calls in a sequence.&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4928</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4928"/>
		<updated>2010-11-11T18:50:46Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details on the specifics of the essay.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts that are discussed within the paper. Here listed below, are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model(System call Interface) refers to the structure in which system calls specifically are managed in a serialized manner. Moreover, the synchronous model completes one system call at a time, and does not move onto the next system call until the previous system call is finished executing. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Pollution:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mode Switches speaks of moving from one medium to another. Specifically moving from the User Space mode to the Kernel mode or Kernel mode to User Space. It does not matter which direction or which modes we are switching from, this is simply a general term.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Batching is the concept  of batching (i.e. grouping) system calls together. This idea is very similar to the counterpart of system calls in a sequential sequence. Groups are formed of system calls after the idea of batching has occurred instead of the initial individual system calls in a sequence.&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4927</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4927"/>
		<updated>2010-11-11T18:47:02Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details on the specifics of the essay.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
In order to fully understand the FlexSC paper, it is essential to understand the key concepts that are discussed within the paper. Here listed below, are the main concepts required to fully comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between the User Space and the Kernel Space. The User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security), hence System calls are the messengers between the User and Kernel Space. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model(System call Interface) refers to the structure in which system calls specifically are managed in a serialized manner. Moreover, the synchronous model completes one system call at a time, and does not move onto the next system call until the previous system call is finished executing. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Pollution:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mode Switches:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System Call Batching is the concept  of batching (i.e. grouping) system calls together. This idea is very similar to the counterpart of system calls in a sequential sequence. Groups are formed of system calls after the idea of batching has occurred instead of the initial individual system calls in a sequence.&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4926</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4926"/>
		<updated>2010-11-11T18:46:22Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The paper we will be analyzing is titled &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. Its authors are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details on the specifics of the essay.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
To fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between User Space and Kernel Space. User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security); hence, system calls act as the messengers between the User and Kernel Space. &lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model (system call interface) refers to the structure in which system calls are managed in a serialized manner: the synchronous model completes one system call at a time, and the calling thread does not move on to the next system call until the previous one has finished executing. &lt;br /&gt;
&lt;br /&gt;
System Call Pollution:&lt;br /&gt;
&lt;br /&gt;
Mode Switches:&lt;br /&gt;
&lt;br /&gt;
System Call Batching is the concept of grouping system calls together and submitting them to the kernel as a single batch. Instead of issuing each system call individually in sequence, and paying a user/kernel transition for every call, the application accumulates calls into a group so that one transition can service the entire batch.&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4923</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4923"/>
		<updated>2010-11-11T18:34:06Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The title of the paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. The authors of this paper are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
To fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between User Space and Kernel Space. User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security); hence, system calls act as the messengers between the User and Kernel Space. &lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model(System call Interface):&lt;br /&gt;
&lt;br /&gt;
System Call Pollution:&lt;br /&gt;
&lt;br /&gt;
Mode Switches:&lt;br /&gt;
&lt;br /&gt;
System Call Batching is the concept of grouping system calls together and submitting them to the kernel as a single batch. Instead of issuing each system call individually in sequence, and paying a user/kernel transition for every call, the application accumulates calls into a group so that one transition can service the entire batch.&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4922</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4922"/>
		<updated>2010-11-11T18:19:16Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The title of the paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. The authors of this paper are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
To fully understand the FlexSC paper, it is essential to understand the key concepts discussed within it. Listed below are the main concepts required to comprehend the paper. &lt;br /&gt;
&lt;br /&gt;
A System Call is the gateway between User Space and Kernel Space. User Space is not given direct access to the Kernel&#039;s services, for several reasons (one being security); hence, system calls act as the messengers between the User and Kernel Space. &lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model(System call Interface):&lt;br /&gt;
&lt;br /&gt;
System Call Pollution:&lt;br /&gt;
&lt;br /&gt;
Mode Switches:&lt;br /&gt;
&lt;br /&gt;
System Call Batching:&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4921</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4921"/>
		<updated>2010-11-11T18:11:37Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Background Concepts: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The title of the paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. The authors of this paper are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
System Calls:&lt;br /&gt;
&lt;br /&gt;
Processor Exceptions:&lt;br /&gt;
&lt;br /&gt;
Synchronous Execution Model(System call Interface):&lt;br /&gt;
&lt;br /&gt;
System Call Pollution:&lt;br /&gt;
&lt;br /&gt;
Mode Switches:&lt;br /&gt;
&lt;br /&gt;
System Call Batching:&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4920</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4920"/>
		<updated>2010-11-11T17:54:41Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The title of the paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. The authors of this paper are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details.&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4919</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4919"/>
		<updated>2010-11-11T17:52:52Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The title of the paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. The authors of this paper are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4918</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4918"/>
		<updated>2010-11-11T17:51:26Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper ==&lt;br /&gt;
The title of the paper we will be analyzing is &amp;quot;FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&amp;quot;. The authors of this paper are Livio Soares and Michael Stumm, both of whom are from the University of Toronto. The paper can be viewed here, [http://www.usenix.org/events/osdi10/tech/full_papers/Soares.pdf] for further details. &lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4917</id>
		<title>COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_3&amp;diff=4917"/>
		<updated>2010-11-11T17:45:47Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;3.FlexSC: Flexible System Call Scheduling with Exception-Less System Calls&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Paper: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Background Concepts: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research Problem: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Contribution: ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Critique References: ==&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4916</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4916"/>
		<updated>2010-11-11T17:39:54Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Group 3 Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
Can&#039;t access the video without a login, as we found out in class, but you can listen to the speech and follow along with the slides pretty easily. I just went through it and it&#039;s not too bad. Rarteaga&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4915</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4915"/>
		<updated>2010-11-11T17:35:51Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Group 3 Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga: rarteaga@connect.carleton.ca&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4914</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4914"/>
		<updated>2010-11-11T17:35:24Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Group 3 Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
Rey Arteaga [rarteaga@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4913</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 3</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_3&amp;diff=4913"/>
		<updated>2010-11-11T17:35:08Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Group 3 Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group 3 Essay=&lt;br /&gt;
&lt;br /&gt;
Hello everyone, please post your contact information here:&lt;br /&gt;
&lt;br /&gt;
Ben Robson [mailto:brobson@connect.carleton.ca brobson@connect.carleton.ca]&lt;br /&gt;
Rey Arteaga [rarteaga@connect.carleton.ca]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Question 3 Group==&lt;br /&gt;
*Abdul-Fatah Tawfic tafatah&lt;br /&gt;
*Arteaga Reynaldo rarteaga&lt;br /&gt;
*Faibish Corey   cfaibish&lt;br /&gt;
*Lawrence Wesley wlawrenc&lt;br /&gt;
*Preston Mike    mpreston&lt;br /&gt;
*Robson  Benjamin brobson&lt;br /&gt;
*Sun     Fangchen sfangche&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4713</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4713"/>
		<updated>2010-10-15T11:05:11Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). Within the kernel, system calls are implemented as functions, most of them written in the C programming language.&lt;br /&gt;
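A minimal sketch of this gateway, assuming a Unix-like system (shown in Python for brevity): both the C library wrapper and Python&#039;s os module end at the same kernel getpid service.&lt;br /&gt;

```python
# Two user-space wrappers around the same kernel service.
import ctypes
import os

libc = ctypes.CDLL(None)   # handle to the C library loaded in this process
raw = libc.getpid()        # libc wrapper around the getpid system call
wrapped = os.getpid()      # Python wrapper around the same call
print(raw == wrapped)
```

Either way, the process never touches kernel memory itself; it only asks the kernel, through the system call interface, to do the work on its behalf.&lt;br /&gt;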
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we’re going to describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change the current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows the process to replace the current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
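A short sketch of the attribute calls above through Python&#039;s os module (the /tmp path and file name are made-up examples):&lt;br /&gt;

```python
# chdir(2), chmod(2) and stat(2) via their os-module wrappers.
import os
import stat

os.chdir("/tmp")                     # change the working directory
with open("attr_demo.txt", "w") as f:
    f.write("x")
os.chmod("attr_demo.txt", stat.S_IRUSR | stat.S_IWUSR)  # mode rw for owner
mode = stat.S_IMODE(os.stat("attr_demo.txt").st_mode)
print(oct(mode))
```

The permission bits read back through stat confirm exactly what chmod set, which is how these calls implement file-system security.&lt;br /&gt;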
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, freeing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest versions of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
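The flag combinations above can be sketched as follows (Python wrappers over open(2); the log path is a hypothetical example):&lt;br /&gt;

```python
# Combining an access mode (O_WRONLY) with status flags (O_CREAT,
# O_TRUNC, O_APPEND) in a single open(2) call.
import os

flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_APPEND
fd = os.open("/tmp/flags_demo.log", flags)
os.write(fd, b"first\n")
os.write(fd, b"second\n")   # O_APPEND sends every write to the end
os.close(fd)
print(os.path.getsize("/tmp/flags_demo.log"))
```

After close() the descriptor number is free for the kernel to hand out again on the next open().&lt;br /&gt;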
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow reading from and writing to a file (assigned to a file descriptor). The only change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit value to determine its position in a file (also called the offset), but it was quickly replaced by &#039;&#039;lseek&#039;&#039;, which allows 32-bit offsets, giving users more flexibility when reading or writing files, especially large ones. It is still used in modern Linux and Unix systems, and developers have since introduced &#039;&#039;lseek64&#039;&#039;, a variant that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned nanosecond resolution in the file’s timestamp fields. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system; they both do the same thing, except &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
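The seek-and-stat interplay described above can be sketched like this (Python wrappers; the /tmp path is a made-up example):&lt;br /&gt;

```python
# lseek(2) to position the file offset, then fstat(2) for status
# through the same file descriptor.
import os

fd = os.open("/tmp/seek_demo.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"abcdef")
os.lseek(fd, 2, os.SEEK_SET)   # move the file offset to byte 2
data = os.read(fd, 3)          # reads the three bytes starting there
info = os.fstat(fd)            # status of the file behind the descriptor
os.close(fd)
print(data, info.st_size)
```

Reading after the seek starts at the requested offset, and the stat structure reports the full file size regardless of where the offset sits.&lt;br /&gt;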
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
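A brief sketch of the linking calls (Python wrappers; all /tmp names are hypothetical examples):&lt;br /&gt;

```python
# link(2), symlink(2) and unlink(2) via the os module.
import os

with open("/tmp/link_target.txt", "w") as f:
    f.write("data")
for name in ("/tmp/hard.txt", "/tmp/soft.txt"):
    if os.path.lexists(name):
        os.unlink(name)                              # clear leftovers
os.link("/tmp/link_target.txt", "/tmp/hard.txt")     # hard link
os.symlink("/tmp/link_target.txt", "/tmp/soft.txt")  # symbolic link (4.2BSD)
nlinks = os.stat("/tmp/link_target.txt").st_nlink
os.unlink("/tmp/soft.txt")   # removes only the symbolic link itself
print(nlinks)
```

The hard link raises the file&#039;s link count to 2, while unlinking the symbolic link leaves the file and its hard link untouched.&lt;br /&gt;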
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created that interpret relative pathnames relative to a directory file descriptor passed as an argument. They can easily be spotted, as the system call names all end in &#039;at&#039;. Here is a sample list of these system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
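Python exposes the &#039;at&#039; family through the dir_fd parameter, so the idea can be sketched as follows (assuming a platform where os.supports_dir_fd covers these functions; the paths are examples):&lt;br /&gt;

```python
# openat(2)/fstatat(2) via the dir_fd parameter of the os wrappers.
import os

dfd = os.open("/tmp", os.O_RDONLY)   # directory file descriptor
fd = os.open("at_demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, dir_fd=dfd)
os.write(fd, b"hi")
os.close(fd)
size = os.stat("at_demo.txt", dir_fd=dfd).st_size   # fstatat underneath
os.close(dfd)
print(size)
```

The relative name "at_demo.txt" is resolved against the directory descriptor, not against the current working directory.&lt;br /&gt;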
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware; they are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, most of them the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using the &#039;&#039;clone&#039;&#039; system call with the CLONE_NEWNS flag, the process gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use these calls as if the devices were files, with the appropriate flags.&lt;br /&gt;
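A tiny sketch of the devices-as-files idea, assuming /dev/urandom is present as on any modern Linux system:&lt;br /&gt;

```python
# A character device opened and read like an ordinary file.
import os

fd = os.open("/dev/urandom", os.O_RDONLY)
noise = os.read(fd, 8)   # the same read(2) call used for regular files
os.close(fd)
print(len(noise))
```

Nothing in the calling code distinguishes the device from a file; the kernel routes the read to the appropriate driver.&lt;br /&gt;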
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in a Unix environment, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call, which is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units. This enables the mapping of large files.&lt;br /&gt;
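The mapping idea can be sketched through the Python mmap module, which wraps mmap(2) (the /tmp path is a made-up example):&lt;br /&gt;

```python
# Map a file into memory, then modify it with plain memory stores.
import mmap
import os

fd = os.open("/tmp/map_demo.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello map")
m = mmap.mmap(fd, 0)      # map the whole file into memory
m[0:5] = b"HELLO"         # a memory write becomes a file update
m.flush()
m.close()
os.close(fd)
print(open("/tmp/map_demo.bin", "rb").read())
```

After the mapping is flushed, the bytes written through memory appear in the file without any explicit write() call.&lt;br /&gt;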
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that can’t be done using the standard system calls. This helps deal with the multitude of devices: each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no universal standard for this system call.&lt;br /&gt;
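One widely supported request code can serve as a sketch: FIONREAD asks a driver how many bytes are waiting to be read, issued here against a pipe (assuming a Linux system where termios exposes the code):&lt;br /&gt;

```python
# An ioctl(2) request through the fcntl module wrapper.
import array
import fcntl
import os
import termios

r, w = os.pipe()
os.write(w, b"pending")
buf = array.array("i", [0])
fcntl.ioctl(r, termios.FIONREAD, buf)   # driver fills in the byte count
print(buf[0])
```

The same request code means something entirely different, or nothing at all, on another driver, which is exactly why no universal standard exists.&lt;br /&gt;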
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer&#039;s information back to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore these three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few other ones like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which sets the system’s idea of the time and date, expressed in seconds. &#039;stime&#039; is still supported by Linux because it has proven successful, unlike the timezone half of &#039;settimeofday&#039;, which was created to set the timezone field (tz_dsttime) as well as the time; that field is essentially unused, and each occurrence of it in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
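The getting half is easy to sketch (Python wrappers over the gettimeofday/clock_gettime family; setting the clock requires privileges and is omitted):&lt;br /&gt;

```python
# Two routes to the same wall clock.
import time

now = time.time()                                   # seconds since the epoch
realtime = time.clock_gettime(time.CLOCK_REALTIME)  # explicit clock id
print(int(now), int(realtime))
```

Both calls read the same kernel-maintained clock, so their results agree to within the time it takes to make the two calls.&lt;br /&gt;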
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so the file can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to that is &#039;uname&#039;, which also gets the name of and information about the current kernel (and is used in the newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039;, which gets file status, &#039;fork&#039;, which spawns a new process, and &#039;stty&#039;, which sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; it stores the child&#039;s status information in an integer supplied by the caller. In Linux there are many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; gets the parent process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such their use (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
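The simplest attribute queries can be sketched in two lines (Python wrappers):&lt;br /&gt;

```python
# getpid(2) and getppid(2): attribute queries that cannot fail.
import os

me = os.getpid()        # ID of the calling process
parent = os.getppid()   # ID of the parent of the calling process
print(me, parent)
```

Because these calls only copy a number out of the kernel&#039;s process table, they have no error cases at all.&lt;br /&gt;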
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the starting, termination, and other tasks required for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix, eleven system calls make up the Process Control Calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;,&#039;&#039;wait()&#039;&#039;,&#039;&#039;execl()&#039;&#039;,&#039;&#039;execlp()&#039;&#039;,&#039;&#039;execle()&#039;&#039;,&#039;&#039;execvp()&#039;&#039;,&#039;&#039;execv()&#039;&#039;,&#039;&#039;execve()&#039;&#039;,&#039;&#039;exit()&#039;&#039;,&#039;&#039;signal()&#039;&#039;,&#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child process to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039; and &#039;&#039;execve()&#039;&#039; are system calls based on the same principle: each takes an executable (binary) file as an argument and turns it into a process. When the system call works properly it does not return; instead it gives control to the new process, which replaces the process that made the system call. The variants differ only in the form of the arguments they take.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as the command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call controls how a process reacts when it receives a signal. A process can act in three different ways. The first is to ignore the signal completely: no matter how many times the signal is sent, the process will not react to it. The only signals that cannot be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal at its default state, which typically means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs, the Unix system gives control to a handler function that executes whatever action is appropriate for the process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal argument is not a valid signal, or if there is no process with a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts that behave the same way, except that of the exec group only &#039;&#039;execve()&#039;&#039; exists as a true system call (the other variants are library wrappers around it). However, using &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and Unix. It is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely see major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Just as humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls that are related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the signature int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one is for reading data and the other is for writing data. Both writes and reads happen in sequential order and run to completion; that is, there are no partial writes — the pipe transmits the whole block of data that was sent before completing the transmission, and likewise a read completes before new data coming into the pipe is read. A specially named pipe is the FIFO, standing for First In First Out. It is accessed as part of the file system through the idea of pipes. Reflecting its name, it serves data on a first come, first served basis: it deals with one item completely before moving on to the next.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through queues identified by IDs. &#039;&#039;msgget()&#039;&#039; acquires the message queue identifier associated with a key. &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue identified by the msqid parameter, which is the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on a message queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is something a process can set, check, or wait on. They are used to control access to resources such as files; file locking gives a good intuition for them. Semaphores aren&#039;t usually held singly, but rather in groups. This is done by creating a set that can contain several semaphores through the &#039;&#039;semget()&#039;&#039; command. &#039;&#039;semop()&#039;&#039; decides what we want the semaphore to accomplish: depending on whether we pass a positive, zero, or negative value, the semaphore&#039;s value is increased, the caller waits for it to reach zero, or the caller blocks until the value is large enough to decrement, respectively. Semaphores were first conceived by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared Memory: These functions allow processes to create, attach to, and detach from shared memory regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating it if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form their own group. To avoid having stray calls floating around, we simply group them into this category.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics can be obtained through the structure whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, just to list a few.&lt;br /&gt;
Parsing Input: Parsing is often used when the user enters data and the program must split that data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or separating numbers from characters. There are several different ways a programmer can parse the data to obtain the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, there are some system calls which overlap and can be considered part of a specific category or mentioned within the Miscellaneous System Calls. Referring to the ``3rd edition, Modern Operating Systems`` textbook, the ``chmod`` call described above in File Management Calls is considered miscellaneous. Similarly, the kill() call is mentioned as a miscellaneous system call. Hence, it is difficult to decipher whether a system call should be placed into a specific category or simply placed in the ``Other`` bin.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Older operating systems like Unix had the majority of the core system calls, but newer (or modern) operating systems have more to offer in terms of sheer quantity of system calls. Even though the syntax has changed a fair bit, it is not too difficult to transfer over. This is a result of programmers desiring a portable set of the original system calls. Although not all calls are portable, most are kept portable so that less work is involved in moving from one system to a newer one.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems for a long period of time. They are the gateway between user space and kernel services. More specifically, they allow user-space programs to acquire kernel services which they otherwise would not have the authority to access. Over the years of development of the Linux and Unix OS, the system calls have not changed drastically. Rather than making radical changes, development has merely added more specific system calls to solve new issues that arise within the OS. Hence, the original 35 system calls have grown to an astonishing quantity of hundreds. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of very intricate pieces all coming together to form what we now know today as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;br /&gt;
&lt;br /&gt;
Tanenbaum, Andrew S. 3rd edition Modern Operating Systems.&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=4711</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=4711"/>
		<updated>2010-10-15T10:59:47Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Not in this group and I&#039;m not completely sure if this is relevant but I found that UNIX used the POSIX standard while Linux used LSB which is based on the POSIX standard. &lt;br /&gt;
This article outlines some conflicts between them [https://www.opengroup.org/platform/single_unix_specification/uploads/40/13450/POSIX_and_Linux_Application_Compatibility_Final_-_v1.0.pdf]. I didn&#039;t find the actual comparisons very comprehensible but the ideas are there. --[[User:Slay|Slay]] 15:05, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uh, where did Figure 1 and much of the current text come from?  It looks like it was cut and pasted from a random source.  Please don&#039;t plagiarize!  --[[User:Soma|Anil]] (19:24, 8 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Look into the reference article &amp;quot;Kernel command using Linux system calls&amp;quot;. Plagiarism is not my goal. I&#039;m using my own words to make a simple but complete description of a system call using the interrupt method. Check the references and If you think it is too close, please let me know. It is hard when an author makes such a good and clear description.--[[User:Sblais2|Sblais2]] 21:02, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I thought it would be nice to first describe what is a system calls and the two current methods of doing them. The first is the interrupt method. The second which is used in Linux 2.6.18+ is using the sysenter and sysexit instructions.--[[User:Sblais2|Sblais2]] 19:56, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
You can&#039;t use that figure.  And you can&#039;t copy the text either, even if you change the words slightly.  But really, you&#039;re just wasting your time.  This question is not talking about how system calls are invoked; if you wanted to discuss this, you should be discussing system call invocation mechanisms on the PDP-11 and VAX systems!  Here I&#039;m interested in what are the calls, i.e., kernel functions that can be invoked by a regular program.--[[User:Soma|Anil]]&lt;br /&gt;
&lt;br /&gt;
This link provides about 40 UNIX system calls along with example on where they would be used from the looks of it: [http://www.di.uevora.pt/~lmr/syscalls.html]. --[[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Thank you for clarifying things. I will go that route. --[[User:Sblais2|Sblais2]] 13:08, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I don&#039;t see everyone contributing to this group. Please do let the others in your group know, divide your work into sections and discuss here. If you have questions - ask.--[[User:praman|Preeti]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This link shows all the system calls from Linux 2.6.33 [http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html]&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 23:36, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
As no one in our group made suggestion on the format of our essay, I&#039;ve put one in place. In your research, each system calls should fit in one of the category. If someone picks one up. Please let me know ASAP. I will be working on that all day. Read the intro&#039;s last paragraph if your not sure what you should write on. My english writing skills are not perfect so if one of you guys see ways to improved the text, please do. --[[User:Sblais2|Sblais2]] 14:29, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well I feel like I&#039;m the only one in that team but...Anyway I&#039;ve completed the first 2 sections. Please try working on the next 4.  If you want to modify something, please post a small gist of it in here so we can all validate. Thanks. --[[User:Sblais2|Sblais2]] 20:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am wondering if we have actually split up the work accordingly i am going to a temped to answer Information Maintenance if anyone has dibs please let me know - Csulliva&lt;br /&gt;
I am finding this site that I find very helpfull to understand system called for linux an UNIX &lt;br /&gt;
http://ss64.com/bash/&lt;br /&gt;
-Csulliva&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do process control calls. and help out on the last part that is not written up yet.I&#039;ll read the other parts as well just to get an understanding. [[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Csulliva, be careful not to confuse system calls and shell commands. Some of them have actually the same name, like &#039;&#039;mkdir&#039;&#039;. But shell commands is on the user-level. Some of them will actually do system calls to complete the operation.&lt;br /&gt;
It&#039;s good to finally hear from you guys.--[[User:Sblais2|Sblais2]] 01:39, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the communication calls and the miscellaneous system calls. Not sure if you guys wanted to add more to this, but I can help out with writing a conclusion as well. I&#039;m somewhat good at writing, so if i see any little things that I could touch up on, i&#039;ll help out with that.-R.arteaga&lt;br /&gt;
&lt;br /&gt;
after my confusion with bash and system calls i had some trouble find a  system calls for linux that would effect time here a web site I found that helped me out with description regarding the calls http://www.digilife.be/quickreferences/QRC/LINUX%20System%20Call%20Quick%20Reference.pdf&lt;br /&gt;
hopefully i am on the write track now....some one stop me if i am not -Csulliva&lt;br /&gt;
&lt;br /&gt;
Looks ok to me Csulliva, might want to check the link I posted in this discussion. It shows all the system calls in the Linux kernel 2.6.30. Then even show history information in it. It is then easy to track early Unix implementation. R.arteaga -&amp;gt; That would be great. I started thinking about a conclusion but writing is not my forte (unless I am underestimating myself). --[[User:Sblais2|Sblais2]] 11:59, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also not to forget to add your references. I added mine in the reference section. --[[User:Sblais2|Sblais2]] 19:02, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
My writing skills have never been any good. I&#039;ve read through a whole bunch of the page and fixed a few typos here and there. I want to add to the &amp;quot;Information Maintenance Calls&amp;quot; section, but I can&#039;t promise that it will be any good. So feel free to help me out. -Dlangloi&lt;br /&gt;
&lt;br /&gt;
okay well i am currently working on the information maintenance calls part and i am a little stuck on the type system data so by all means help out.. that being said can anyone give me a hint on a system call that works with a system data in UNIX because i read the manual and i still drawing a blank, thanks-Csulliva&lt;br /&gt;
&lt;br /&gt;
Ya I have also had troubles trying to find system calls that affect system data. I have read through about half of that manual in the references, but nothing seems to be related. Anyways, my eyes are starting to hurt. I will try again later and see if something turns up. -Dlangloi&lt;br /&gt;
I&#039;ve added to it but I have my doubts that it is correct. Please revise my work as it could be wrong. -Dlangloi&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Just to make sure before I add stuff to the page: are the exec(),fork(), wait(), and exit() calls be considered part of process control calls ? [[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Apharan2=&amp;gt; yes they are considered part of process control as they deal directly with processes. Check this link out for more detail [http://www.softpanorama.org/Internals/unix_system_calls.shtml]. --[[User:Sblais2|Sblais2]] 23:54, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Thanks a lot wasn&#039;t sure  [[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Hopefully you guys like the conclusion, not sure how good it is. Hard to judge your own work i find. Rarteaga&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4701</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4701"/>
		<updated>2010-10-15T10:21:53Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The mechanism by which the CPU prevents a process from accessing the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Unix and Linux, the system calls themselves are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows the process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Flag arguments are used to set everything from access modes, like O_RDONLY (read only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, freeing it to be reused. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest versions of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (designated by a file descriptor). The only change was in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file; it used a 16 bit value to determine its position in the file (also called the offset). It was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32 bit offsets, giving users more flexibility when reading or writing files, especially large ones. It is still used in modern Linux and Unix systems. Developers have since introduced &#039;&#039;lseek64&#039;&#039;, a version that uses 64 bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat has returned nanosecond fields in the file’s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system. They both do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take pathnames relative to a directory file descriptor as arguments. They can easily be spotted, as the system call names all finish with &#039;at&#039;. Here is a sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel. If a process is created using the &#039;&#039;clone&#039;&#039; system call with the CLONE_NEWNS flag, the process gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of ‘‘umount2’’ in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags give finer control over the device: you use the calls as if the devices were files, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, which maps (or, with &#039;&#039;munmap&#039;&#039;, unmaps) files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access the device through memory. This system call is still used in a Unix environment, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
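A minimal sketch of the idea, using Python&#039;s mmap module (which wraps the mmap(2) call described above) on an ordinary temporary file:&lt;br /&gt;

```python
import mmap
import os
import tempfile

# Sketch: map a file into memory and read it back through the mapping.
fd, path = tempfile.mkstemp()
os.write(fd, b"mapped bytes")
m = mmap.mmap(fd, 0)        # length 0 maps the whole file
data = bytes(m[0:6])        # reading memory reads the file contents
print(data)                 # b'mapped'
m.close()
os.close(fd)
os.unlink(path)
```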
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that cannot be done using the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware dependent, so there is no standard available for this system call.&lt;br /&gt;
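As one Linux-specific example of such a request code, FIONREAD asks how many bytes are waiting to be read on a descriptor, an operation no standard call provides; this sketch assumes a Linux system where Python&#039;s termios module defines FIONREAD:&lt;br /&gt;

```python
import fcntl
import os
import struct
import termios

# Linux-specific sketch: use an ioctl request code (FIONREAD) to ask
# how many unread bytes are queued in a pipe.
r, w = os.pipe()
os.write(w, b"pending")
buf = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
count = struct.unpack("i", buf)[0]
print(count)                # 7 bytes are queued
os.close(r)
os.close(w)
```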
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system&#039;s own information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. The earliest versions of UNIX used the system call &#039;stime&#039;, which both returned the time and date and set the system&#039;s idea of the time and date by altering the seconds count. &#039;stime&#039; is still available in Linux. By contrast, the timezone-setting portion of &#039;settimeofday&#039; (its tz_dsttime field) never worked as intended: every occurrence of this field in the kernel source, apart from its declaration, is considered a bug.&lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; and, similarly, &#039;uname&#039; get the name of and information about the current kernel (&#039;uname&#039; is used in the newer versions of UNIX, not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
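A quick sketch of the uname call from above, via Python&#039;s os.uname() wrapper:&lt;br /&gt;

```python
import os

# Sketch: os.uname() wraps the uname system call, returning the kernel
# name, release, version, and machine architecture.
info = os.uname()
print(info.sysname, info.release)
print(info.machine)
```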
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; in Linux, wait stores status information in an integer and takes a pointer to that integer as its argument. Linux has many more system calls of this type; here are a few: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated.&lt;br /&gt;
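Two of the get-attribute calls above can be sketched directly from Python, which wraps stat(2) and getppid():&lt;br /&gt;

```python
import os

# Sketch: os.stat wraps stat(2) to get file status; os.getppid wraps
# getppid(), which always succeeds.
st = os.stat("/")
print(oct(st.st_mode))      # file type and permission bits
ppid = os.getppid()
print(ppid)                 # PID of this process's parent
```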
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are 11 system calls that make up the Process Control Calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, which in turn makes one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. Wait fails if the process has no child process to wait for or if its status argument points to an invalid address.&lt;br /&gt;
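The fork()/wait() pair above can be sketched with Python&#039;s os wrappers; the exit status 7 is an arbitrary value chosen for illustration:&lt;br /&gt;

```python
import os

# Sketch: fork a child, have it exit with status 7, wait for it.
pid = os.fork()
if pid == 0:                     # the child sees a return value of 0
    os._exit(7)                  # exit() with status 7
child, status = os.wait()        # parent blocks until the child ends
code = os.WEXITSTATUS(status)    # decode the status integer
print(child == pid, code)        # True 7
```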
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call &lt;br /&gt;
takes a binary file as an argument and turns it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the call. &lt;br /&gt;
Each variant is used when different kinds of arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following definitions for these system calls are taken from this reference: [http://www.di.uevora.pt/~lmr/syscalls.html]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
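A typical fork-then-exec pattern using the execv() variant can be sketched as follows; the sketch assumes /bin/echo exists, as it does on ordinary Linux systems:&lt;br /&gt;

```python
import os

# Sketch: the child replaces its image with /bin/echo via execv();
# the parent waits for the new program to finish.
pid = os.fork()
if pid == 0:
    os.execv("/bin/echo", ["echo", "replaced by exec"])
    os._exit(1)                  # reached only if execv() fails
_, status = os.wait()
ok = os.WEXITSTATUS(status) == 0
print(ok)                        # True
```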
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call controls how a process reacts when a given signal is delivered to it. A process can react in three different ways. The first is to ignore the signal completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal at its default disposition, which for many signals means the process will end when it receives it. The last option is to catch the signal: when this occurs the UNIX system gives control to a handler function that the process has registered for that signal. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process identified by its PID. It fails if the signal name is not a valid signal, or if there is no process whose PID matches the argument value.&lt;br /&gt;
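The signal()/kill() pair above can be sketched with Python&#039;s signal module (which uses sigaction() underneath); the process installs a handler and then delivers SIGUSR1 to itself:&lt;br /&gt;

```python
import os
import signal

# Sketch: register a handler for SIGUSR1, then send the signal to
# ourselves with kill() and observe that the handler caught it.
caught = []

def handler(signum, frame):
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)
print(caught == [signal.SIGUSR1])
```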
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only &#039;&#039;execve&#039;&#039; exists as a true system call. These system calls behave the same way in Linux. However, the &#039;&#039;signal()&#039;&#039; call is not recommended because its implementation differs between versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely see major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes being able to communicate with one another. Much as humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; has the form int pipe(int file_descriptors[2]), where file_descriptors is an array with two entries: one descriptor for reading data and the other for writing it. Data is written and read in sequential order, and small transfers complete as a unit: for writes up to the pipe&#039;s buffer limit there are no partial writes, so the pipe transmits the whole block of data that was sent, and a read likewise consumes data in order before moving on to new data entering the pipe. A special named pipe is the FIFO, standing for First In First Out. It is accessed as part of the file system through the same idea of pipes and, reflecting its name, delivers data on a first come, first served basis.&lt;br /&gt;
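The pipe behaviour described above, in a minimal Python sketch using os.pipe():&lt;br /&gt;

```python
import os

# Sketch: a pipe is a pair of descriptors (read end, write end);
# bytes come out in the order they were written in.
r, w = os.pipe()
os.write(w, b"first ")
os.write(w, b"second")
os.close(w)                 # closing the write end marks end-of-data
data = os.read(r, 64)
print(data)                 # b'first second'
os.close(r)
```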
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through queues identified by IDs. &#039;&#039;msgget()&#039;&#039; returns the identifier of the message queue associated with a given key. Closely related, but not the same, the &#039;&#039;msgrcv()&#039;&#039; call receives a message from the queue named by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue, and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on a message queue.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked, and semaphores are used to control access to shared resources such as files; the concept of file locking gives a good intuition for them. Semaphores aren&#039;t usually held singly but rather in groups: a set that can contain several semaphores is created with the &#039;&#039;semget()&#039;&#039; call. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to do: depending on whether the operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for the semaphore to reach zero, or the caller blocks until the semaphore is large enough to subtract from, respectively. Semaphores were first proposed by Dijkstra and used in computers in the late 60&#039;s.&lt;br /&gt;
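The System V semget()/semop() calls are not in the Python standard library, so this sketch uses threading.Semaphore as an analogue of a single semaphore to show the blocking behaviour described above:&lt;br /&gt;

```python
import threading

# Analogue sketch: threading.Semaphore stands in for one SysV semaphore.
sem = threading.Semaphore(1)     # semaphore initialized to 1
sem.acquire()                    # "P" operation: subtract 1
got_extra = sem.acquire(blocking=False)   # would block at 0, so fails
sem.release()                    # "V" operation: add 1, waking waiters
print(got_extra)                 # False
```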
&lt;br /&gt;
&lt;br /&gt;
Shared Memory: Functions involving shared memory allow processes to create, attach to, and detach from shared memory regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating the region if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
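Python exposes an analogous facility in multiprocessing.shared_memory (Python 3.8+); the create/attach/detach lifecycle below parallels shmget()/shmat()/shmdt(), though the underlying mechanism is POSIX shared memory rather than the SysV calls:&lt;br /&gt;

```python
from multiprocessing import shared_memory

# Analogue sketch: a named shared region other processes could attach
# to by name, then write and read through the mapping.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[0:5] = b"hello"                 # write through the mapping
data = bytes(shm.buf[0:5])
print(data)                             # b'hello'
shm.close()                             # detach, like shmdt()
shm.unlink()                            # remove the region
```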
&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. To avoid leaving random calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics on the time can be obtained through the structure given by these attributes: &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039;, &#039;&#039;tm_year&#039;&#039;, just to list a few.&lt;br /&gt;
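Python&#039;s time.localtime() returns the broken-down time structure whose fields mirror the tm_* attributes just listed:&lt;br /&gt;

```python
import time

# Sketch: the struct tm fields, as exposed by time.localtime().
t = time.localtime()
print(t.tm_year, t.tm_mon, t.tm_mday)   # date components
print(t.tm_hour, t.tm_min, t.tm_sec)    # time-of-day components
```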
Parsing Input: Parsing is used when the user enters data and the program must divide it into appropriate pieces in order to obtain specific parts, e.g. separating words from each other or numbers from characters. There are several ways a programmer can parse data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
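One simple way to separate words from numbers, sketched with Python&#039;s re module on a made-up input line:&lt;br /&gt;

```python
import re

# Sketch: divide a line of input into its words and its numbers.
line = "move 12 boxes to bay 7"
words = re.findall(r"[a-z]+", line)
numbers = [int(n) for n in re.findall(r"[0-9]+", line)]
print(words)     # ['move', 'boxes', 'to', 'bay']
print(numbers)   # [12, 7]
```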
&lt;br /&gt;
&lt;br /&gt;
Lastly, there are some system calls which overlap and could either be placed in a specific category or mentioned among the Miscellaneous System Calls. Referring to the 3rd edition of the &#039;&#039;Modern Operating Systems&#039;&#039; textbook, the &#039;&#039;chmod&#039;&#039; call described above under File Management Calls is considered miscellaneous. Similarly, the &#039;&#039;kill()&#039;&#039; call is mentioned as a miscellaneous system call. Hence, it is difficult to decide whether a system call should be placed in a specific category or simply in the &amp;quot;other&amp;quot; bin.&lt;br /&gt;
&lt;br /&gt;
Older operating systems like Unix already had the majority of the core system calls, but newer (modern) operating systems have more to offer in terms of quantity. Since the syntax has changed only a fair bit, it&#039;s not too difficult to transfer over. This is the result of programmers desiring a portable set of the original system calls: although not all calls are portable, keeping most of them portable means less work is involved in moving from one system to a newer one.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: more specifically, they allow user-space programs to obtain kernel services that they do not have the authority to access directly. Over the years of development of the Linux and Unix OS, the system calls have not changed drastically. Rather than radical changes, development has merely added more specific system calls to solve new issues that arise within the OS. Hence, the original 35 system calls have grown to an astonishing quantity of hundreds of system calls. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we now know as the Linux kernel (2.6.30+) or Unix. System calls are a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;br /&gt;
&lt;br /&gt;
Tanenbaum, Andrew S. 3rd edition Modern Operating Systems.&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4699</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4699"/>
		<updated>2010-10-15T10:18:40Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Communications Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). System calls are implemented as small routines, typically written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that came with one of the original UNIX systems in the early 70s. In the next paragraphs, we&#039;re going to describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these calls were the addition of new status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, freeing it to be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest version of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
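The open()/close() flags above can be sketched through Python&#039;s os.open wrapper; the file name is illustrative:&lt;br /&gt;

```python
import os
import tempfile

# Sketch: open() with O_CREAT and O_APPEND, then close() the descriptor.
d = tempfile.mkdtemp()
path = os.path.join(d, "log.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_APPEND, 0o644)
os.write(fd, b"one ")
os.write(fd, b"two")          # O_APPEND: each write goes to the end
os.close(fd)                  # close() frees the descriptor for reuse
with open(path, "rb") as f:
    data = f.read()
print(data)                   # b'one two'
os.unlink(path)
os.rmdir(d)
```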
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow processes to read from and write to a file (assigned to a file descriptor). The only change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position (also called the offset) in a file, which it determined with a 16-bit value. It was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses a 32-bit offset, giving users more flexibility when reading or writing files, especially large ones. It is still used in modern Linux and Unix systems, and developers are now working on &#039;&#039;lseek64&#039;&#039;, a system call that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, stat has returned a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system. They do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. 
These calls are only used in a UNIX environment; in Linux, &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; support the same operation.&lt;br /&gt;
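The lseek() and fstat() calls above can be sketched with Python&#039;s os wrappers on a throwaway file:&lt;br /&gt;

```python
import os
import tempfile

# Sketch: lseek() repositions the file offset; fstat() reports file
# status through an open descriptor.
fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")
os.lseek(fd, 4, os.SEEK_SET)      # jump to offset 4
tail = os.read(fd, 3)             # read resumes from the new offset
print(tail)                       # b'456'
size = os.fstat(fd).st_size
print(size)                       # 10
os.close(fd)
os.unlink(path)
```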
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
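The link()/symlink()/unlink() trio can be sketched as follows; the file names are illustrative:&lt;br /&gt;

```python
import os
import tempfile

# Sketch: link() makes a second name for the same inode, symlink()
# makes a link that stores a path, unlink() removes a name.
d = tempfile.mkdtemp()
target = os.path.join(d, "a.txt")
with open(target, "w") as f:
    f.write("x")
hard = os.path.join(d, "hard.txt")
soft = os.path.join(d, "soft.txt")
os.link(target, hard)
os.symlink(target, soft)
same = os.stat(target).st_ino == os.stat(hard).st_ino
is_sym = os.path.islink(soft)
print(same, is_sym)               # True True
os.unlink(soft)                   # removes only the symbolic link
for p in (hard, target):
    os.unlink(p)
os.rmdir(d)
```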
&lt;br /&gt;
&lt;br /&gt;
In Linux 2.6.16, a family of new system calls was added so that calls could take pathnames relative to a directory file descriptor. They are easy to spot because their names all end in &#039;at&#039;. Here is a sample list of the new system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the mount system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using the &#039;&#039;clone&#039;&#039; system call with the CLONE_NEWNS flag, it receives a new namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags give finer control over the device: you use the calls as if the devices were files, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, which maps (or, with &#039;&#039;munmap&#039;&#039;, unmaps) files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access the device through memory. This system call is still used in a Unix environment, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that cannot be done using the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware dependent, so there is no standard available for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system&#039;s own information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. The earliest versions of UNIX used the system call &#039;stime&#039;, which both returned the time and date and set the system&#039;s idea of the time and date by altering the seconds count. &#039;stime&#039; is still available in Linux. By contrast, the timezone-setting portion of &#039;settimeofday&#039; (its tz_dsttime field) never worked as intended: every occurrence of this field in the kernel source, apart from its declaration, is considered a bug.&lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; and, similarly, &#039;uname&#039; get the name of and information about the current kernel (&#039;uname&#039; is used in the newer versions of UNIX, not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux adds many more system calls of this type, for example: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; gets the parent process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the details of their use (in particular the format of the cap_user_*_t types) are updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
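As a minimal sketch (not taken from the essay&#039;s sources), the never-fails behaviour of &#039;getppid&#039; is easy to observe through Python&#039;s os module, whose functions are thin wrappers over these system calls:&lt;br /&gt;

```python
import os

# os.getpid and os.getppid wrap the getpid()/getppid() system calls.
# getppid is documented to always succeed, so no error handling is needed.
pid = os.getpid()
ppid = os.getppid()
print("pid:", pid, "parent pid:", ppid)
```

Both calls simply read fields of the kernel&#039;s process table entry for the caller, which is why they cannot fail.&lt;br /&gt;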
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are 11 system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
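The two return values of fork() can be demonstrated with a short sketch using Python&#039;s os module (an illustration, not part of the original essay):&lt;br /&gt;

```python
import os

# os.fork wraps the fork() system call: it returns 0 in the child
# and the child's PID in the parent (an OSError is raised on failure).
pid = os.fork()
if pid == 0:
    # Child process: terminate immediately with exit status 7.
    os._exit(7)
# Parent process: wait for the child and decode its exit status.
child_pid, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)
```

After the call, the parent sees the child&#039;s PID while the child saw 0, which is how the two otherwise identical processes tell themselves apart.&lt;br /&gt;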
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: each takes a binary file as an argument and turns it into the running process. When the system call works properly it does not return; instead it gives control to the new program, which replaces the process that made the call. The variants differ only in the arguments they are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this source [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as the command-line arguments of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
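The defining property of the whole exec family, that a successful call never returns, can be sketched in Python (an illustration under the assumption of a POSIX system; os.execv wraps the underlying exec call, and sys.executable is used as a conveniently available program to run):&lt;br /&gt;

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # Child: replace this process image with a fresh interpreter that
    # exits with status 0. On success execv does NOT return; control
    # passes entirely to the new program.
    os.execv(sys.executable, [sys.executable, "-c", "raise SystemExit(0)"])
    os._exit(127)  # reached only if execv itself failed
# Parent: collect the child's exit status.
_, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)
```

The fallback os._exit(127) after the exec is a common idiom: it runs only when the exec call fails, mirroring the &amp;quot;does not return&amp;quot; behaviour described above.&lt;br /&gt;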
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call determines how a process reacts when a given signal is delivered to it. When the program receives the signal it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will do nothing in response. The only signal that can&#039;t be ignored or caught is SIGKILL. The second is to leave the signal in its default state, which for many signals means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs the UNIX system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process when something occurs. It fails if the signal name is not a valid signal, or if there is no process whose PID matches the argument value.&lt;br /&gt;
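Catching a signal and sending one with kill() can be combined in one small sketch (a POSIX-only illustration using Python&#039;s signal and os modules; the polling loop is just a safety margin for delivery timing):&lt;br /&gt;

```python
import os
import signal
import time

received = []

def handler(signum, frame):
    # Runs when the signal is caught (the third option above).
    received.append(signum)

# Install a handler for SIGUSR1, then use kill() to send that
# signal to our own PID.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

# Give the interpreter a moment to deliver the pending signal.
for _ in range(100):
    if received:
        break
    time.sleep(0.01)
```

Sending an invalid signal number or targeting a nonexistent PID would instead raise OSError, matching the failure modes of kill() described above.&lt;br /&gt;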
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only execve exists as a true system call. Otherwise these system calls behave the same way in Linux. However, the &#039;&#039;signal()&#039;&#039; call is not recommended because of its differing implementations across versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely see major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
Communication calls relate to the concept of processes having the ability to communicate with one another. Just as humans use a telephone as their portal for communicating with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the prototype int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one for reading data and the other for writing data. Both writes and reads happen in sequential order and complete fully; that is, there are no partial writes, so the pipe transmits the whole block of data that was sent before completing the transmission, and likewise a read is completed before another read, or newly arriving data, is handled. A specially named pipe is the FIFO, standing for First In First Out. It is accessed as part of the file system but behaves like a pipe. Reflecting its name, it serves data on a first-come, first-served basis: bytes are read out in exactly the order they were written.&lt;br /&gt;
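The read-end/write-end pairing of pipe() can be sketched with Python&#039;s os module, which returns the two descriptors directly (a minimal illustration, not from the essay&#039;s sources):&lt;br /&gt;

```python
import os

# os.pipe wraps pipe(): it returns (read_fd, write_fd), the two
# entries of the file_descriptors array in the C interface.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello pipe")  # data goes in at the write end
os.close(write_fd)                 # closing signals end-of-data
data = os.read(read_fd, 1024)      # ...and comes out at the read end
os.close(read_fd)
```

The bytes arrive in exactly the order they were written, which is the FIFO property described above.&lt;br /&gt;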
&lt;br /&gt;
&lt;br /&gt;
Messages: These calls all involve sending and receiving messages through queues identified by IDs. &#039;&#039;msgget()&#039;&#039; returns the identifier of the message queue associated with a key. Closely related, but not the same, &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue named by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to a queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations on the queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is a counter that is set or checked; semaphores are used to control access to shared resources, and file locking gives a good intuition for them. Semaphores aren&#039;t usually managed singly but rather in groups: a set that can contain several semaphores is created with the &#039;&#039;semget()&#039;&#039; call. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to do: depending on whether its operation value is positive, zero, or negative, the value is added to the semaphore, the caller waits for the semaphore to become zero, or the caller blocks until the semaphore is large enough to subtract from, respectively. Semaphores were first described by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
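The System V semget()/semop() interface is not exposed by the Python standard library, but the counter semantics it implements can be illustrated with threading.Semaphore, a close conceptual analogue (this is a stand-in, not the SysV API itself):&lt;br /&gt;

```python
import threading

# A counting semaphore initialised to 2: at most two holders at once.
# acquire() decrements the counter; with blocking=False it fails
# immediately instead of waiting when the counter is zero.
sem = threading.Semaphore(2)
got_first = sem.acquire(blocking=False)   # counter 2 -> 1
got_second = sem.acquire(blocking=False)  # counter 1 -> 0
got_third = sem.acquire(blocking=False)   # fails: counter is 0
sem.release()                             # counter 0 -> 1, like a positive semop
```

The blocking-at-zero behaviour corresponds to a negative semop() operation that must wait until the semaphore value is large enough.&lt;br /&gt;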
&lt;br /&gt;
&lt;br /&gt;
Shared Memory: These calls allow processes to create, attach, and detach shared memory regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating it if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; call attaches the shared memory to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
UNIX and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form their own group. To avoid random calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics on the time can be obtained through the structure whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
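The same field names appear in Python&#039;s time module, whose localtime() result mirrors C&#039;s struct tm (a small sketch; note one difference: Python&#039;s tm_year holds the full year, whereas C&#039;s tm_year counts years since 1900):&lt;br /&gt;

```python
import time

# time.localtime returns a structure with the struct tm field names.
now = time.localtime()
clock = (now.tm_hour, now.tm_min, now.tm_sec)
date = (now.tm_year, now.tm_mon, now.tm_mday)
print("time:", clock, "date:", date)
```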
Parsing Input: Parsing is often used when the user enters data and the program must split this data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or separating numbers from other characters. There are several ways a programmer can parse the data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, there are some system calls which overlap and could be considered part of a specific category or mentioned within the miscellaneous system calls. Referring to the 3rd edition of the &#039;&#039;Modern Operating Systems&#039;&#039; textbook, the &#039;&#039;chmod&#039;&#039; call described above under File Management Calls is classed as miscellaneous. Similarly, the kill() call is mentioned there as a miscellaneous system call. Hence, it is sometimes difficult to decipher whether a system call should be placed into a specific category or simply put in the &amp;quot;Other&amp;quot; bin.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the structure of the Linux kernel (2.6.30+) and the UNIX operating systems. They are the gateway between user space and kernel services: they allow user-space programs to obtain kernel services that processes otherwise have no authority to access. Over the years of development of the Linux and UNIX operating systems, the system calls have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues arising within the OS. This is how the original 35 system calls grew to the astonishing quantity of hundreds of system calls available today. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. Operating systems are colossal programs consisting of many intricate pieces, all coming together to form what we now know as the Linux kernel (2.6.30+) or UNIX. System calls are a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;br /&gt;
&lt;br /&gt;
Tanenbaum, Andrew S. 3rd edition Modern Operating Systems.&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4696</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4696"/>
		<updated>2010-10-15T10:12:19Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The mechanism by which the CPU prevents a process from accessing the kernel is commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). The system call implementations themselves are small routines written in the C programming language.&lt;br /&gt;
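Every one of the operations below crosses that user/kernel boundary. As a small sketch (an illustration, not from the essay&#039;s sources), each os call here enters the kernel via the system call of the same name:&lt;br /&gt;

```python
import os
import tempfile

# Each of these Python calls is a thin wrapper over a system call:
# mkstemp uses open(), then write(), close() and unlink() follow.
fd, path = tempfile.mkstemp()
written = os.write(fd, b"entering the kernel\n")  # write() system call
os.close(fd)                                      # close() system call
os.remove(path)                                   # unlink() system call
```

From the program&#039;s point of view these look like ordinary function calls; the trap into kernel mode (interrupt or sysenter) is hidden inside the wrapper.&lt;br /&gt;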
&lt;br /&gt;
The UNIX and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, such as system calls dealing with errors. Today, the UNIX and Linux operating systems contain hundreds of system calls, but in general they all grew out of the 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
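The effect of chmod and chdir can be sketched via Python&#039;s os module, which wraps these calls under the same names (a minimal illustration, not part of the original text):&lt;br /&gt;

```python
import os
import stat
import tempfile

# chmod: restrict a scratch file to owner read/write only (0600).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
mode = stat.S_IMODE(os.stat(path).st_mode)

# chdir: change this process's current working directory and back.
old_cwd = os.getcwd()
os.chdir(tempfile.gettempdir())
new_cwd = os.getcwd()
os.chdir(old_cwd)
os.remove(path)
```

Note that chdir affects only the calling process, which is exactly the per-process nature of the underlying system call.&lt;br /&gt;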
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Flag arguments are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of new status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest versions of UNIX, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow processes to read and write from a file (designated by a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. It used a 16-bit address offset, but was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit address offsets, giving users more flexibility when accessing or writing to files, especially large ones. It is still used in modern Linux and UNIX systems. As of now, developers are working on &#039;&#039;lseek64&#039;&#039;, a call that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems will output different values to represent the state of a file. Since kernel 2.5.48, stat has returned a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system. They both do the same thing, except that fstatvfs takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same functionality.&lt;br /&gt;
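The information stat returns, including the nanosecond timestamp fields mentioned above, can be seen through Python&#039;s os.stat, a direct wrapper (a sketch for illustration only):&lt;br /&gt;

```python
import os
import tempfile

# Create a 5-byte scratch file, then ask the kernel for its status.
fd, path = tempfile.mkstemp()
os.write(fd, b"12345")
os.close(fd)

info = os.stat(path)          # wraps the stat() system call
size = info.st_size           # file size in bytes
mtime_ns = info.st_mtime_ns   # modification time, nanosecond precision
os.remove(path)
```

The *_ns fields expose the nanosecond resolution that stat has provided since kernel 2.5.48.&lt;br /&gt;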
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take relative pathnames as arguments. They are easily spotted, as their names all finish with &#039;at&#039;. Here is a sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
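The pattern shared by all the *at calls, interpreting a relative pathname against a directory file descriptor, is exposed in Python through the dir_fd parameter of many os functions (a sketch under the assumption of a POSIX system with dir_fd support):&lt;br /&gt;

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
dfd = os.open(tmpdir, os.O_RDONLY)  # the directory fd used by openat-style calls

# "demo.txt" is resolved relative to dfd, not to the current directory.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644, dir_fd=dfd)
os.close(fd)
exists = os.access("demo.txt", os.F_OK, dir_fd=dfd)  # faccessat-style check
os.unlink("demo.txt", dir_fd=dfd)                    # wraps unlinkat
os.close(dfd)
os.rmdir(tmpdir)
```

Resolving names against a held directory descriptor avoids races where the directory is renamed between two path lookups, which is the motivation for the *at family.&lt;br /&gt;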
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware; they are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the mount system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using the &#039;&#039;clone&#039;&#039; system call with the CLONE_NEWNS flag, the new process gets a namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments give better control of the device: you use them as if the devices were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in a UNIX environment, but since Linux 2.4, Linux has supplemented it with the mmap2 system call, which is basically the same as mmap except for a final argument specifying the offset into the file in 4096-byte units; this enables the mapping of large files.&lt;br /&gt;
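Mapping a file and then reading and writing it through memory can be sketched with Python&#039;s mmap module, which wraps the mmap system call (an illustration mapping a regular file rather than a device):&lt;br /&gt;

```python
import mmap
import os
import tempfile

# Write a 10-byte file, then map it into this process's address space.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello mmap")
mapped = mmap.mmap(fd, 10)     # read/write mapping of the whole file

first_word = bytes(mapped[0:5])  # a read through memory, no read() call
mapped[0:5] = b"HELLO"           # a write through memory
mapped.close()
os.close(fd)
os.remove(path)
```

Once mapped, ordinary slice operations replace explicit read()/write() system calls, which is exactly the appeal of mmap for large files and devices.&lt;br /&gt;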
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of UNIX, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that can&#039;t be done using the standard system calls; this helps deal with the multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware dependent, so there is no standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer&#039;s own information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set of time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. In the earliest versions of UNIX the system call used to interact with times and dates was &#039;stime&#039;, which sets the system&#039;s idea of the time and date, expressed in seconds. &#039;stime&#039; is still supported by Linux because it has been successful, unlike the timezone portion of &#039;settimeofday&#039;, which was created to change timezones (the tz_dsttime field) as well as the time; every occurrence of that field in the kernel source (apart from its declaration) is a bug, so that part of the call effectively failed. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so that it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel, as does its successor &#039;uname&#039; (the one used in newer versions rather than the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux adds many more system calls of this type, for example: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; gets the parent process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the details of their use (in particular the format of the cap_user_*_t types) are updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are 11 system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: each takes a binary file as an argument and turns it into the running process. When the system call works properly it does not return; instead it gives control to the new program, which replaces the process that made the call. The variants differ only in the arguments they are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this source [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
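The difference between the path-based and PATH-searching variants can be illustrated with the equivalent wrappers in Python&#039;s os module (a hedged sketch; the /bin/echo binary is assumed to exist):&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # like execv(): full path plus an argv-style list.
    # On success this call never returns; the child process
    # image is replaced by /bin/echo.
    os.execv("/bin/echo", ["echo", "hello", "from", "execv"])
os.waitpid(pid, 0)

pid = os.fork()
if pid == 0:
    # like execvp(): bare program name, resolved through PATH.
    os.execvp("echo", ["echo", "hello", "from", "execvp"])
os.waitpid(pid, 0)
```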
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call specifies how a process responds when a given signal is delivered to it. A program can respond to a signal in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process does nothing in response. The only signal that can neither be ignored nor caught is SIGKILL. The second is to leave the signal at its default disposition, which for most signals means the process ends when it receives the signal. The last option is to catch the signal; when this occurs, the Unix system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to the process identified by the given PID. It fails if the signal name is not a valid signal, or if no process has a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
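The three dispositions of &#039;&#039;signal()&#039;&#039;, together with &#039;&#039;kill()&#039;&#039;, can be sketched using Python&#039;s signal and os modules, which wrap the same calls (an illustrative sketch only):&lt;br /&gt;

```python
import os
import signal

caught = []

def handler(signum, frame):
    # third disposition: catch the signal in a handler function
    caught.append(signum)

signal.signal(signal.SIGTERM, handler)          # catch SIGTERM
signal.signal(signal.SIGUSR1, signal.SIG_IGN)   # first disposition: ignore
# SIGKILL can be neither caught nor ignored.

os.kill(os.getpid(), signal.SIGUSR1)   # ignored: nothing happens
os.kill(os.getpid(), signal.SIGTERM)   # delivered to our handler
print(caught == [signal.SIGTERM])      # prints True
```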
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts, except that of the exec group only &#039;&#039;execve&#039;&#039; exists as a true system call (the other variants are library wrappers around it). These system calls behave the same way in Linux. However, using the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely receive major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Similar to how humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; has the signature int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one descriptor for reading data and one for writing data. Data is read in the order it was written, and small transfers complete as a unit: writes of up to PIPE_BUF bytes are atomic, so the pipe transfers the whole buffer that was sent without interleaving. The same holds for reading, where a message is read all the way through before the reader moves on to new data arriving in the pipe. A special kind of named pipe is the FIFO (First In, First Out), which is accessed as part of the file system.  &lt;br /&gt;
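The read/write descriptor pair returned by &#039;&#039;pipe()&#039;&#039; can be sketched through Python&#039;s os.pipe wrapper (illustrative only):&lt;br /&gt;

```python
import os

read_fd, write_fd = os.pipe()       # like int fds[2]; pipe(fds)

os.write(write_fd, b"hello pipe")   # data goes in at the write end
data = os.read(read_fd, 100)        # and comes out the read end, in order
print(data)                         # prints b'hello pipe'

os.close(write_fd)
os.close(read_fd)
```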
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through a queue, usually identified by IDs. &#039;&#039;msgget()&#039;&#039; obtains the message-queue identifier associated with a key, creating the queue if necessary. Closely related but not the same, &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue identified by the msqid parameter, which names the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on the queue.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked, and semaphores are used to control access to shared resources such as files; file locking is a helpful analogy for understanding them. Semaphores are not usually held singly but in groups: &#039;&#039;semget()&#039;&#039; creates a set that can contain several semaphores, and &#039;&#039;semop()&#039;&#039; specifies what we want a semaphore to do. Depending on whether the operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for it to reach zero, or the caller blocks until the value is large enough to subtract, respectively. Semaphores were first proposed by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared Memory: The shared-memory functions allow processes to create, attach to, and detach shared memory regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating it if it does not already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
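Python&#039;s standard library does not expose the System V &#039;&#039;shmget()&#039;&#039;/&#039;&#039;shmat()&#039;&#039; calls directly, but the same create/attach/detach life cycle can be sketched with the analogous POSIX-style multiprocessing.shared_memory module (Python 3.8+; an analogy rather than the same API):&lt;br /&gt;

```python
from multiprocessing import shared_memory

# create: analogous to shmget() with IPC_CREAT
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[0:5] = b"hello"            # write through the mapping

# attach by name from the same (or another) process: like shmat()
other = shared_memory.SharedMemory(name=shm.name)
print(bytes(other.buf[0:5]))       # prints b'hello'

other.close()                      # detach: like shmdt()
shm.close()
shm.unlink()                       # remove the region entirely
```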
&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form their own group. To avoid leaving stray calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics can be obtained through the structure whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
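These are the fields of the C struct tm; the same broken-down structure is visible through Python&#039;s time module, which wraps the underlying call (a quick illustration):&lt;br /&gt;

```python
import time

now = time.localtime()    # wraps localtime(), returning a struct tm

# the struct tm fields listed above:
print(now.tm_hour, now.tm_min, now.tm_sec)    # current time of day
print(now.tm_year, now.tm_mon, now.tm_mday)   # current date
```
&lt;br /&gt;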
Parsing Input: Parsing is often used when the user enters data and the program must split that data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or separating numbers from characters. There are several different ways a programmer can parse data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, there are some system calls which overlap and can be considered part of a specific category or mentioned within the Miscellaneous System Calls. Referring to the ``3rd edition, Modern Operating Systems`` textbook, the ``chmod`` command described above in File Management Calls is considered Miscellaneous. Similarly, the kill() command is mentioned as a Miscellaneous System Call. Hence, it is difficult to decipher whether a system call should be placed into a specific category or simply placed in the ``Other`` bin.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the Linux kernel (2.6.30+) and the Unix operating systems for a long period of time. They are the gateway between user space and kernel services: they allow user-space programs to acquire kernel services that processes cannot otherwise reach directly. Over the years of development of the Linux and Unix OS, the system calls have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. This process has led the original 35 system calls to grow to an astonishing quantity of hundreds of system calls. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces all coming together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;br /&gt;
&lt;br /&gt;
Tanenbaum, Andrew S. Modern Operating Systems, 3rd edition.&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4693</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4693"/>
		<updated>2010-10-15T10:05:50Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU enforces this by running user code in what is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). System call implementations are, for the most part, small kernel routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of them have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control of the file system. The call &#039;&#039;chroot&#039;&#039; allows the process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
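These calls have one-for-one wrappers in Python&#039;s os module; a small sketch of &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; using a temporary file, for illustration only:&lt;br /&gt;

```python
import os
import stat
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "example.txt")
open(path, "w").close()

# chmod: change the file mode to owner read/write only
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))            # prints 0o600

# chdir: change the current working directory of this process
os.chdir(tmpdir)
print(os.path.realpath(os.getcwd()) == os.path.realpath(tmpdir))  # prints True
```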
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it so it can be reused. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest version of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
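The flag-driven behaviour of &#039;&#039;open&#039;&#039; can be sketched with os.open, which passes these flags straight through to the system call (illustrative only; a throwaway temporary file is assumed):&lt;br /&gt;

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")

# O_CREAT creates the file if missing (the creat() role),
# O_WRONLY requests write-only access, O_APPEND appends.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"first line\n")
os.write(fd, b"second line\n")   # O_APPEND: always lands at the end
os.close(fd)                     # descriptor released for reuse

with open(path, "rb") as f:
    print(f.read())   # prints b'first line\nsecond line\n'
```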
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow reading from and writing to a file (designated by a file descriptor). The only change was in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit address offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which allows 32-bit address offsets. It is still used in modern Linux and Unix systems, and developers are now working to implement &#039;&#039;lseek64&#039;&#039;, a system call that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat has returned nanosecond fields in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system. They do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same purpose.&lt;br /&gt;
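The seek-and-status calls above map directly onto os.lseek and os.fstat in Python (a brief sketch over a temporary file):&lt;br /&gt;

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.write(fd, b"abcdef")

os.lseek(fd, 2, os.SEEK_SET)     # move to byte offset 2 from the start
print(os.read(fd, 2))            # prints b'cd'

info = os.fstat(fd)              # fstat: status via a file descriptor
print(info.st_size)              # prints 6
os.close(fd)
```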
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
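&#039;&#039;link&#039;&#039;, &#039;&#039;symlink&#039;&#039; and &#039;&#039;unlink&#039;&#039; also have direct Python wrappers; a sketch of hard versus symbolic links (temporary files, for illustration only):&lt;br /&gt;

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "target")
open(target, "w").close()

hard = os.path.join(d, "hardlink")
os.link(target, hard)              # link(): a second name for the same inode
print(os.stat(target).st_nlink)    # prints 2

soft = os.path.join(d, "symlink")
os.symlink(target, soft)           # symlink(): a file pointing at a name
print(os.path.islink(soft))        # prints True

os.unlink(soft)                    # removes only the link itself
print(os.path.exists(target))      # prints True
```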
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could deal with relative pathnames as arguments. They can easily be spotted, as the system call names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, the process gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use them as if the devices were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, used to map files or devices into memory (with &#039;&#039;munmap&#039;&#039; reversing the mapping). Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device. This system call is still used in a Unix environment, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
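Python&#039;s mmap module wraps the same call; a minimal sketch of mapping a file into memory and writing through the mapping (illustrative only, over a temporary file):&lt;br /&gt;

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mapped")
with open(path, "wb") as f:
    f.write(b"hello mapping")

fd = os.open(path, os.O_RDWR)
mem = mmap.mmap(fd, 0)        # map the whole file into memory
print(mem[0:5])               # prints b'hello'
mem[0:5] = b"HELLO"           # writes go straight through the mapping
mem.close()                   # like munmap()
os.close(fd)

with open(path, "rb") as f:
    print(f.read(5))          # prints b'HELLO'
```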
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that can&#039;t be done using the standard system calls, which helps in dealing with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which set the system&#039;s idea of the time and date by altering the seconds counter. &#039;stime&#039; is still supported by Linux because it works, unlike the timezone half of &#039;settimeofday&#039;: the tz_dsttime field was meant to convey daylight-saving information as well as the time, but every use of this field in the kernel source (apart from its declaration) is a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so the file can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of, and information about, the current kernel; the similar &#039;uname&#039; does the same (and is used in newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes, some common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version of &#039;wait&#039; stores status information in an integer supplied through a pointer argument. In Linux there are many more system calls of this type; here are a few: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the use of these functions (in particular the format of the cap_user_*_t types) changes as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix there are eleven system calls that make up the Process Control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039;, &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that has &lt;br /&gt;
finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call &lt;br /&gt;
takes a binary file as an argument and turns it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the call.&lt;br /&gt;
The variants differ only in how their arguments are supplied.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions for these system calls as described in this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call specifies how a process responds when a given signal is delivered to it. A program can respond to a signal in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process does nothing in response. The only signal that can neither be ignored nor caught is SIGKILL. The second is to leave the signal at its default disposition, which for most signals means the process ends when it receives the signal. The last option is to catch the signal; when this occurs, the Unix system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to the process identified by the given PID. It fails if the signal name is not a valid signal, or if no process has a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts, except that of the exec group only &#039;&#039;execve&#039;&#039; exists as a true system call (the other variants are library wrappers around it). These system calls behave the same way in Linux. However, using the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely receive major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Similar to how humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; has the signature int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one descriptor for reading data and one for writing data. Data is read in the order it was written, and small transfers complete as a unit: writes of up to PIPE_BUF bytes are atomic, so the pipe transfers the whole buffer that was sent without interleaving. The same holds for reading, where a message is read all the way through before the reader moves on to new data arriving in the pipe. A special kind of named pipe is the FIFO (First In, First Out), which is accessed as part of the file system.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions send and receive messages through message queues identified by IDs. &#039;&#039;msgget()&#039;&#039; obtains the identifier of the message queue associated with a key, creating the queue if necessary. &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue named by its msqid parameter, and &#039;&#039;msgsnd()&#039;&#039; sends a message to that queue; the two can be thought of as inverses of each other. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on a queue.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked, and is used to control access to shared resources; file locking is a helpful mental model. Semaphores are usually held not singly but in groups: &#039;&#039;semget()&#039;&#039; creates a set that can contain several semaphores, and &#039;&#039;semop()&#039;&#039; decides what the semaphore should do. Depending on whether the operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for it to reach zero, or the caller blocks until the value is large enough, respectively. Semaphores were first described by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
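The counting behaviour can be sketched with Python&#039;s threading.Semaphore, a high-level analogue of the semget()/semop() interface (the System V calls themselves are not exposed by the Python standard library):&lt;br /&gt;

```python
import threading

# A semaphore initialized to 2: up to two holders at once.
# acquire() plays the role of a semop() decrement, and
# release() the role of a semop() increment.
sem = threading.Semaphore(2)

results = []
for attempt in range(3):
    results.append(sem.acquire(blocking=False))

print(results)  # prints: [True, True, False]
```

The third acquire fails because the count is exhausted, just as a blocking semop() would suspend the caller until another process incremented the semaphore.&lt;br /&gt;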
&lt;br /&gt;
&lt;br /&gt;
Shared Memory: The shared memory functions let processes create, attach and detach shared address regions. &#039;&#039;shmget()&#039;&#039; returns the ID for a shared memory region, creating it if it does not already exist. &#039;&#039;shmat()&#039;&#039; attaches the shared memory to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching it.&lt;br /&gt;
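Python&#039;s multiprocessing.shared_memory module mirrors this create/attach/detach sequence, and can serve as a sketch of the shmget()/shmat()/shmdt() lifecycle:&lt;br /&gt;

```python
from multiprocessing import shared_memory

# Create a named region, as shmget() would.
region = shared_memory.SharedMemory(create=True, size=16)
region.buf[:5] = b"hello"

# A second handle attaches to the same region by name, like shmat().
view = shared_memory.SharedMemory(name=region.name)
data = bytes(view.buf[:5])

view.close()     # detach this handle, like shmdt()
region.close()
region.unlink()  # remove the region itself
print(data.decode())  # prints: hello
```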
&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few that are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. Rather than leave stray calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, these calls let the user access the time of day. Specifics are obtained through the broken-down time structure, whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
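Python&#039;s time.localtime() exposes the same broken-down fields, so the structure can be inspected directly:&lt;br /&gt;

```python
import time

# localtime() fills a struct-tm-like value with the fields listed above.
now = time.localtime()
print(now.tm_year, now.tm_mon, now.tm_mday, now.tm_hour, now.tm_min, now.tm_sec)
```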
Parsing Input: Parsing is used when the user enters data and the program must split it into appropriate divisions to obtain specific parts, e.g. separating words from each other, or numbers from characters. There are several ways a programmer can parse data to extract the pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, some system calls overlap and can be considered part of a specific category or of the Miscellaneous System Calls. The 3rd edition of the &#039;&#039;Modern Operating Systems&#039;&#039; textbook treats &#039;&#039;chmod&#039;&#039;, described above under File Management Calls, as miscellaneous, and likewise lists &#039;&#039;kill()&#039;&#039; as a miscellaneous system call. Hence, it is often difficult to decide whether a system call belongs in a specific category or simply in the &amp;quot;other&amp;quot; bin.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: they let user-space programs request services that they cannot invoke directly. Over the years of Linux and Unix development, the existing system calls have not changed drastically; instead, new and more specific system calls have been added to solve new problems as they arose in the OS. This is how the original set of roughly 35 system calls grew to an astonishing quantity of several hundred. All of them can be categorized into six major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or Unix; system calls are a small but essential building block in that tower.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. &#039;&#039;A Quarter Century of Unix&#039;&#039;, Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual, The Unix and Linux Forums. http://www.unix.com/man-page/FreeBSD/2/&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4691</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4691"/>
		<updated>2010-10-15T10:03:46Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Communications Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process cannot access the kernel directly: it cannot read kernel memory and it cannot call kernel functions. The CPU mode that enforces this separation is commonly known as protected mode, and system calls are the controlled exception to the rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). From an application&#039;s point of view, system calls are usually reached through small wrapper functions in the C library.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not fit in the other categories, such as system calls dealing with errors. Today, Unix and Linux contain hundreds of system calls, but in general they all descend from the roughly 35 system calls shipped with the original UNIX in the early 70s. In the following paragraphs we describe the system calls in each of these categories, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every operation required to run a file system: creating, deleting, opening and closing files are just a few examples, and most of them have hardly changed over the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement file-system security, while &#039;&#039;chdir&#039;&#039; lets a process change its current working directory. In the 4th Berkeley distribution of UNIX (4BSD), new system calls were added to give applications more control over the file system: &#039;&#039;chroot&#039;&#039; lets a process replace its root directory with one specified in a path argument, and &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, &#039;&#039;chown&#039;&#039; follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not.&lt;br /&gt;
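These calls are exposed almost one-to-one by Python&#039;s os module, which makes them easy to demonstrate (a sketch using library wrappers, not the raw C interface; the file name is made up for the example):&lt;br /&gt;

```python
import os
import stat
import tempfile

workdir = tempfile.mkdtemp()
os.chdir(workdir)                 # chdir: change the working directory

path = os.path.join(workdir, "example.txt")
open(path, "w").close()
os.chmod(path, stat.S_IRUSR)      # chmod: owner read-only (mode 0400)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # prints: 0o400
```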
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments range from access modes such as O_RDONLY (read-only) to status flags such as O_APPEND (append mode); the only modifications made to these calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call closes a file descriptor so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a file directory. In the earliest versions of Unix, deleting a directory required a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; calls; Unix 4.2BSD added &#039;&#039;rmdir&#039;&#039; to solve this problem, and also added &#039;&#039;rename&#039;&#039;, which lets processes change the name or location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls read from and write to a file (identified by a file descriptor); the only change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; call moves to a specified position in a file. It used a 16-bit offset and was quickly replaced by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets and is still used in modern Linux and Unix systems; developers are now working on &#039;&#039;lseek64&#039;&#039;, which will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; call returns the status of a file. SVR4 added two other versions: &#039;&#039;fstat&#039;&#039;, which takes a file descriptor, and &#039;&#039;lstat&#039;&#039;, which gives the status of a symbolic link itself. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, stat returns a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor. These are used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
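A short sketch (again via Python&#039;s os wrappers) showing &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;lseek&#039;&#039; and &#039;&#039;fstat&#039;&#039; operating on a single file descriptor:&lt;br /&gt;

```python
import os
import tempfile

fd, path = tempfile.mkstemp()   # an open read/write descriptor

os.write(fd, b"abcdef")         # write six bytes
os.lseek(fd, 2, os.SEEK_SET)    # reposition to byte offset 2
chunk = os.read(fd, 3)          # read the next three bytes
size = os.fstat(fd).st_size     # fstat reports the file status

os.close(fd)
os.unlink(path)
print(chunk, size)  # prints: b'cde' 6
```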
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new calls grew out of &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
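The difference between hard and symbolic links can be sketched as follows (os-module wrappers; the path names are made up for the example):&lt;br /&gt;

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, "original")
open(original, "w").close()

hard = os.path.join(d, "hardlink")
soft = os.path.join(d, "symlink")
os.link(original, hard)      # link: a second name for the same inode
os.symlink(original, soft)   # symlink: a file that stores a pathname

nlink = os.stat(original).st_nlink   # now 2: two hard names
os.unlink(original)                  # remove one name only
print(nlink, os.path.exists(hard), os.path.exists(soft))  # prints: 2 True False
```

The data survives through the remaining hard link, while the symbolic link is left dangling because the pathname it stores no longer resolves.&lt;br /&gt;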
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were added so that calls could take pathnames relative to a directory file descriptor. They are easily spotted, as their names all end with &#039;at&#039;. A sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware. They are mainly used to request and release devices, to logically attach or detach them, to get and modify device attributes, and to read from and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in UNIX and Linux are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;, which were among the few system calls available in the first version of UNIX in 1971. They allow the operating system to load and unload file systems on storage devices. Most changes to &#039;&#039;mount&#039;&#039; were new mount flags added to enhance performance or control; for example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, is per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; call unmounts the file system from the storage device; the only noteworthy change was the addition of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments give better control: devices are used as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
SVR4 introduced the &#039;&#039;mmap&#039;&#039; system call, which maps (or unmaps) files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, letting processes access the device through memory. This call is still used in Unix environments, but since Linux 2.4 it has been supplemented by &#039;&#039;mmap2&#039;&#039;, which is essentially the same except that its final argument specifies the file offset in 4096-byte units, enabling the mapping of large files.&lt;br /&gt;
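Python&#039;s mmap module wraps the same call; the sketch below maps an ordinary file and modifies it through memory rather than through write():&lt;br /&gt;

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"............")          # 12 placeholder bytes

with mmap.mmap(fd, 12) as mapping:     # map the file into memory
    mapping[0:5] = b"hello"            # a memory store, not a write() call

os.lseek(fd, 0, os.SEEK_SET)
contents = os.read(fd, 12)             # the file reflects the store
os.close(fd)
os.unlink(path)
print(contents[:5].decode())  # prints: hello
```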
&lt;br /&gt;
&lt;br /&gt;
Version 7 of Unix introduced the &#039;&#039;ioctl&#039;&#039; system call for device-specific operations that cannot be expressed through the standard system calls, which helps deal with a multitude of devices. Each device driver provides a set of ioctl request codes that allow various operations on its device. Because the request codes are device dependent, there is no general standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls return information about the system to the user, or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file or device attributes. To understand the difference between Linux and UNIX with regard to these system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time and date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. The earliest versions of UNIX used the &#039;stime&#039; system call to interact with times and dates: it sets the system&#039;s idea of the time and date by altering the seconds. &#039;stime&#039; is still available in Linux because it has proved successful, unlike the timezone-setting part of &#039;settimeofday&#039; (tz_dsttime), which is obsolete: each occurrence of that field in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
&lt;br /&gt;
The second subtype is get/set system data. UNIX does this using &#039;open&#039;, &#039;read&#039;, &#039;write&#039; and &#039;close&#039;: &#039;open&#039; opens a file so that it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; and &#039;uname&#039; get the name of, and information about, the current kernel (&#039;uname&#039; is also used in newer versions of UNIX, though not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
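The &#039;uname&#039; call is available from Python as os.uname(), which returns the kernel name, release and machine fields described above:&lt;br /&gt;

```python
import os

# os.uname() wraps the uname system call on Unix-like systems.
info = os.uname()
print(info.sysname, info.release, info.machine)
```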
&lt;br /&gt;
The third subtype is get/set process, file or device attributes. UNIX has several system calls for processing file and device attributes, some common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the main difference is that the Linux version stores status information in an integer passed by pointer. Linux adds many more system calls of this type, for example: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; gets the parent process ID. &#039;capget&#039; and &#039;capset&#039; are the raw kernel interface for getting and setting thread capabilities; they are specific to Linux, and their use (in particular the format of the cap_user_*_t types) changes as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
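&#039;getppid&#039; is trivially demonstrated (the capability calls have no standard-library wrapper, so they are omitted from this sketch):&lt;br /&gt;

```python
import os

# Every process has a parent; getppid() reports its PID and cannot fail.
print("pid:", os.getpid(), "parent pid:", os.getppid())
```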
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process control calls are system calls that handle the start and termination of processes, along with other tasks required for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix, the following system calls make up the process control calls:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent and the other the child. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and the PID of the child to the parent process; when it fails, it returns -1 to the parent.&lt;br /&gt;
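The two return values of &#039;&#039;fork()&#039;&#039; can be seen in this sketch (Python&#039;s os.fork wraps the call on Unix-like systems; the exit status 42 is an arbitrary example):&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # Child: fork() returned 0 here.
    os._exit(42)                     # exit with a known status
else:
    # Parent: fork() returned the child's PID; wait for it.
    child, status = os.waitpid(pid, 0)
    print(child == pid, os.WEXITSTATUS(status))  # prints: True 42
```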
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: Makes a parent process wait for a child process to end, returning the PID of the child that finished. It fails if the process has no child to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are all based on the same principle: the call takes a binary file as an argument and turns it into the running process. When the call succeeds it does not return; instead it gives control to the new program, which replaces the process that made the call. Each variant is used when different arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following definitions of these system calls are taken from this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments for the new program (argv[]), terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name does not have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name does not have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
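Combining &#039;&#039;fork()&#039;&#039; with the exec family looks like this sketch, where the child replaces itself with a fresh interpreter (the -c program and its exit status 7 are arbitrary examples):&lt;br /&gt;

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # The child image is replaced; execv() only returns on failure.
    os.execv(sys.executable, [sys.executable, "-c", "import sys; sys.exit(7)"])
    os._exit(1)  # unreachable unless execv() failed
else:
    child, status = os.waitpid(pid, 0)
    print(os.WEXITSTATUS(status))  # prints: 7
```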
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: Determines how the process reacts when a given signal is delivered. A program can respond in three ways. The first is to ignore the signal completely: no matter how many times it is sent, the process does nothing (the only signals that can never be ignored or caught are SIGKILL and SIGSTOP). The second is to leave the signal in its default state, which for many signals means the process ends when it receives it. The last option is to catch the signal: the system gives control to a handler function that executes whatever behavior is appropriate for the process. &lt;br /&gt;
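The catching behaviour can be toggled from Python&#039;s signal module; in the sketch below the process installs a handler and then delivers a signal to itself:&lt;br /&gt;

```python
import os
import signal

received = []

def handler(signum, frame):
    # The "catch" case: control passes to this function.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)     # install the handler
os.kill(os.getpid(), signal.SIGUSR1)       # deliver the signal to ourselves
print(received == [signal.SIGUSR1])        # prints: True
```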
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal name is not a valid signal, or if no process has a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts except the exec group, of which only &#039;&#039;execve()&#039;&#039; exists as a true system call (the rest are C-library wrappers around it). These system calls behave the same way in Linux. However, &#039;&#039;signal()&#039;&#039; is not recommended because its implementation differs across versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes how the process reacts when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely receive major modifications, but new system calls based on them may be created for specific cases that make programs easier to write.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to processes communicating with one another. Much as humans use a telephone to talk to each other, processes use &amp;quot;pipes&amp;quot; and related mechanisms as their gateway. &lt;br /&gt;
&lt;br /&gt;
In unix there are four subgroups of system calls that are related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the form int pipe(int file_descriptors[2]). The file_descriptors array holds two descriptors: one for reading data and the other for writing data. Reads and writes proceed in sequential order, and writes of up to PIPE_BUF bytes are atomic: the pipe transmits the whole buffer before another write is interleaved, and a read likewise consumes its data before new data entering the pipe is seen. A special named pipe is the FIFO (First In, First Out), which is accessed as part of the file system.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions send and receive messages through message queues identified by IDs. &#039;&#039;msgget()&#039;&#039; obtains the identifier of the message queue associated with a key, creating the queue if necessary. &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue named by its msqid parameter, and &#039;&#039;msgsnd()&#039;&#039; sends a message to that queue; the two can be thought of as inverses of each other. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on a queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked, and is used to control access to shared resources; file locking is a helpful mental model. Semaphores are usually held not singly but in groups: &#039;&#039;semget()&#039;&#039; creates a set that can contain several semaphores, and &#039;&#039;semop()&#039;&#039; decides what the semaphore should do. Depending on whether the operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for it to reach zero, or the caller blocks until the value is large enough, respectively. Semaphores were first described by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: The shared memory functions let processes create, attach and detach shared address regions. &#039;&#039;shmget()&#039;&#039; returns the ID for a shared memory region, creating it if it does not already exist. &#039;&#039;shmat()&#039;&#039; attaches the shared memory to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching it.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few that are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. Rather than leave stray calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, these calls let the user access the time of day. Specifics are obtained through the broken-down time structure, whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
Parsing Input: Parsing is used when the user enters data and the program must split it into appropriate divisions to obtain specific parts, e.g. separating words from each other, or numbers from characters. There are several ways a programmer can parse data to extract the pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
Lastly, some system calls overlap and can be considered part of a specific category or of the Miscellaneous System Calls. The 3rd edition of the &#039;&#039;Modern Operating Systems&#039;&#039; textbook treats &#039;&#039;chmod&#039;&#039;, described above under File Management Calls, as miscellaneous, and likewise lists &#039;&#039;kill()&#039;&#039; as a miscellaneous system call. Hence, it is often difficult to decide whether a system call belongs in a specific category or simply in the &amp;quot;other&amp;quot; bin.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: they let user-space programs request services that they cannot invoke directly. Over the years of Linux and Unix development, the existing system calls have not changed drastically; instead, new and more specific system calls have been added to solve new problems as they arose in the OS. This is how the original set of roughly 35 system calls grew to an astonishing quantity of several hundred. All of them can be categorized into six major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or Unix; system calls are a small but essential building block in that tower.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. The Unix and Linux Forums. http://www.unix.com/man-page/FreeBSD/2/&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4688</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4688"/>
		<updated>2010-10-15T09:56:02Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Linux and Unix, the system calls themselves are implemented inside the kernel, largely in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we&#039;re going to describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to change its current root directory to the one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links; a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest versions of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow processes to read from and write to a file (identified by a file descriptor). The only change was in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, which allows 32-bit offsets and is still used in modern Linux and Unix systems. Developers have since introduced &#039;&#039;lseek64&#039;&#039;, a variant that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned nanosecond resolution in the file&#039;s timestamp fields. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system; they both do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to provide the same functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created to deal with pathnames interpreted relative to a directory file descriptor passed as an argument. They can easily be spotted, as the system call names all finish with &#039;at&#039;. Here is a sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance or control. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, the process gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use the calls as if the devices were files, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments; since Linux 2.4, 32-bit Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call, which is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, enabling the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that can&#039;t be done using the standard system calls. This helps deal with a multitude of devices: each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no single standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return system information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039; to return the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which returned the time and date and set the system&#039;s idea of the time and date by altering the seconds. &#039;stime&#039; is still available in Linux because it works well, unlike the timezone argument of &#039;settimeofday&#039; (tz_dsttime), which was meant to change timezone handling as well as the time but never worked: each occurrence of this field in the kernel source (apart from its declaration) is a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so the file can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to it is &#039;uname&#039;, which does the same and is also used in the newer versions of UNIX (not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes; some examples common to both UNIX and Linux are &#039;stat&#039;, which gets file status, &#039;fork&#039;, which spawns a new process, and &#039;stty&#039;, which sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is in how the Linux version stores status information in the integer whose address is passed as an argument. In Linux there are many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets a process identifier. The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix there are 11 system calls that make up the Process Control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and turns it into the running process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new program, which replaces the process image that made the call. &lt;br /&gt;
Each variant is used when a different form of arguments is given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described in this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call sets how a process reacts when a given signal is delivered to it. A process can act in three different ways. The first is to ignore the signal completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal at its default disposition, which for most signals means the process will end when it receives the signal. The last option is to catch the signal: when this occurs, the system gives control to a handler function that the process has registered for it. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a specified process. It fails if the signal name is not a valid signal or if there is no process whose PID matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts, except that of the exec group only &#039;&#039;execve()&#039;&#039; exists as a true system call (the others are library wrappers around it). These system calls also behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations in different versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action of the process when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls are unlikely to undergo major modifications, but other system calls based on them may be created for specific cases to make programs easier to write.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communications calls relate to the concept of processes communicating with one another. Similar to how humans use a telephone as their portal to communicate with each other, communications calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call is declared as int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one for reading the data, and the other for writing the data. Writes and reads proceed in sequential order and fully complete their task, i.e., there are no partial writes: the pipe carries the whole data that was sent before completing the transmission. The same concept holds for reading, where data is read all the way through before another pipe, or new information coming into the pipe, is read. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending messages to and receiving messages from a queue, usually by ID. &#039;&#039;msgget()&#039;&#039; obtains the identifier of the message queue corresponding to a key. &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue identified by the msqid parameter, which names the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on a message queue. &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked; semaphores are used to control access to shared resources such as files, and the concept of file locking is a good way to get an intuition for them. Semaphores are not usually handled singly but in groups: a set that can contain several semaphores is created with the &#039;&#039;semget()&#039;&#039; call. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to accomplish: a positive value is added to the semaphore, a zero value waits until the semaphore becomes zero, and a negative value is subtracted from the semaphore, blocking until its value is large enough. Semaphores were first proposed by Dijkstra and came into use in computers in the late 60&#039;s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: Functions involving shared memory allow processes to create, attach and detach shared address regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating it if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form their own group. To avoid leaving random calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g., System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics can be obtained through the structure holding these fields: &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039;, &#039;&#039;tm_year&#039;&#039;, just to list a few.&lt;br /&gt;
Parsing Input: Parsing is often used when the user enters data and the program must divide it into appropriate pieces in order to obtain specific parts, e.g., separating words from each other or separating numbers from characters. There are several different ways a programmer can parse data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
Lastly, there are some system calls which overlap and can be considered part of a specific category or mentioned within the Miscellaneous System Calls. Referring to the textbook &#039;&#039;Modern Operating Systems&#039;&#039; (3rd edition), the &#039;&#039;chmod&#039;&#039; call described above under File Management Calls is considered Miscellaneous. Similarly, the kill() call is mentioned as a Miscellaneous System Call. Hence, it is difficult to decipher whether a system call should be placed in a specific category or simply placed in the &#039;&#039;Other&#039;&#039; bin.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the Linux kernel (2.6.30+) and Unix operating systems for a long time. They are the gateway between user space and kernel services: more specifically, they allow user-space programs to request kernel services that they have no authority to access directly. Over the years of development in the Linux and Unix OS, the system calls themselves have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. This is how the original 35 system calls grew into an astonishing quantity numbering in the hundreds. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. The Unix and Linux Forums. http://www.unix.com/man-page/FreeBSD/2/&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4687</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4687"/>
		<updated>2010-10-15T09:54:01Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Linux and Unix, the system calls themselves are implemented inside the kernel, largely in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we&#039;re going to describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to change its current root directory to the one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links; a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments select everything from access modes, such as O_RDONLY (read-only), to status flags, such as O_APPEND (append mode). The only modifications made to these calls over time have been additional status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, releasing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a directory. In the earliest versions of Unix, deleting a directory required a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added to solve this problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
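A small sketch tying these calls together (the directory and file names are invented for the example): a directory is created with &#039;&#039;mkdir&#039;&#039;, a file created inside it with &#039;&#039;open&#039;&#039;, then renamed, unlinked, and the directory removed with &#039;&#039;rmdir&#039;&#039;.&lt;br /&gt;

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: exercise mkdir(), open() with O_CREAT, rename(), unlink()
   and rmdir() on a throwaway directory.  Returns 0 on success. */
int demo_file_mgmt(void)
{
    if (mkdir("/tmp/fm_demo_dir", 0755) != 0 && errno != EEXIST)
        return -1;

    /* O_CREAT creates the file; O_TRUNC empties it if it already existed */
    int fd = open("/tmp/fm_demo_dir/a.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    close(fd);

    if (rename("/tmp/fm_demo_dir/a.txt", "/tmp/fm_demo_dir/b.txt") != 0)
        return -1;
    if (unlink("/tmp/fm_demo_dir/b.txt") != 0)   /* delete the file link */
        return -1;
    return rmdir("/tmp/fm_demo_dir");            /* 0 once the dir is empty */
}
```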
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls allow a process to read from and write to a file (identified by a file descriptor). The only notable change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call moves to a specified position in a file. It used a 16-bit offset, but was quickly replaced by &#039;&#039;lseek&#039;&#039;, which uses 32-bit offsets and is still used in modern Linux and Unix systems; on 32-bit systems, &#039;&#039;lseek64&#039;&#039; provides 64-bit offsets. The &#039;&#039;stat&#039;&#039; call lets a process get the status of a file. Two other versions of this call were later created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; reports the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; reports the status of a file identified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, &#039;&#039;stat&#039;&#039; reports nanosecond precision in the file&#039;s timestamp fields. With the release of 4.4BSD, two new calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as its argument. These calls are used in a UNIX environment; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
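The read/write/seek/stat cycle can be sketched as follows (the scratch file name is invented for the example): write a short string, seek back into it, read the tail, and check the size with &#039;&#039;fstat&#039;&#039;.&lt;br /&gt;

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: write(), lseek() and read() on a scratch file, then check
   its size with fstat().  Returns 0 on success. */
int demo_read_write(void)
{
    char path[] = "/tmp/rw_demo_XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;

    const char msg[] = "hello syscalls";              /* 14 bytes */
    int ok = write(fd, msg, 14) == 14
          && lseek(fd, 6, SEEK_SET) == 6;             /* jump to "syscalls" */

    char buf[16] = {0};
    ok = ok && read(fd, buf, sizeof buf - 1) == 8
            && strcmp(buf, "syscalls") == 0;

    struct stat sb;                                   /* status via the fd */
    ok = ok && fstat(fd, &sb) == 0 && sb.st_size == 14;

    close(fd);
    unlink(path);
    return ok ? 0 : -1;
}
```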
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 2.6.16, a family of system calls was added that interprets relative pathnames against a directory file descriptor rather than the current working directory. They are easy to spot, since their names all end in &#039;at&#039;. A sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
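A brief sketch of this pattern (the directory and file names are invented for the example): a directory file descriptor anchors the relative paths given to &#039;&#039;openat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039;.&lt;br /&gt;

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: resolve "note.txt" relative to an open directory descriptor
   instead of the current working directory.  Returns 0 on success. */
int demo_openat(void)
{
    if (mkdir("/tmp/at_demo", 0755) != 0 && errno != EEXIST)
        return -1;

    int dirfd = open("/tmp/at_demo", O_RDONLY);
    if (dirfd < 0)
        return -1;

    /* "note.txt" is interpreted relative to dirfd, not to the cwd */
    int fd = openat(dirfd, "note.txt", O_CREAT | O_WRONLY, 0644);
    int ok = fd >= 0;
    if (ok)
        close(fd);

    ok = ok && unlinkat(dirfd, "note.txt", 0) == 0;
    close(dirfd);
    rmdir("/tmp/at_demo");
    return ok ? 0 : -1;
}
```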
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware. They are mainly used to request and release devices, to logically attach or detach them, to get and set device attributes, and to read from and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for UNIX and Linux are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to attach and detach file systems held on storage devices. A few changes have been made to &#039;&#039;mount&#039;&#039;, mostly the addition of new mount flags to enhance performance and control. For example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, is per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; system call detaches the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments give finer control over the device; devices are used as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, which maps files or devices into memory (&#039;&#039;munmap&#039;&#039; removes the mapping). Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access the device through memory. This call is still used in Unix environments, but since Linux 2.4 there is also the &#039;&#039;mmap2&#039;&#039; system call. It is essentially the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
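A minimal sketch of file mapping (the scratch file is invented for the example): four bytes are written to a file, the file is mapped read-only, and the bytes are read back through the returned pointer.&lt;br /&gt;

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: map a 4-byte scratch file into memory with mmap() and read
   its contents through the returned pointer.  Returns 0 on success. */
int demo_mmap(void)
{
    char path[] = "/tmp/mmap_demo_XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;
    if (write(fd, "abcd", 4) != 4) {
        close(fd);
        unlink(path);
        return -1;
    }

    char *p = mmap(NULL, 4, PROT_READ, MAP_SHARED, fd, 0);
    int ok = (p != MAP_FAILED) && p[0] == 'a' && p[3] == 'd';
    if (p != MAP_FAILED)
        munmap(p, 4);                    /* undo the mapping */

    close(fd);
    unlink(path);
    return ok ? 0 : -1;
}
```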
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call performs device-specific operations that cannot be expressed through the standard system calls, which helps deal with the multitude of devices. Each device driver provides its own set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no single standard for this system call.&lt;br /&gt;
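As a small sketch, one widely supported request code is FIONREAD, which asks how many bytes are waiting to be read on a descriptor; here it is applied to a pipe (the choice of FIONREAD is ours for illustration, and it is Linux/BSD-specific rather than universal).&lt;br /&gt;

```c
#include <assert.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Sketch: use ioctl(FIONREAD) to count the bytes buffered in a pipe.
   Returns the number of pending bytes, or -1 on error. */
int demo_ioctl(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    if (write(fds[1], "xyz", 3) != 3)
        return -1;

    int pending = 0;
    int rc = ioctl(fds[0], FIONREAD, &pending);  /* how much is readable? */

    close(fds[0]);
    close(fds[1]);
    return rc == 0 ? pending : -1;
}
```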
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return system information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore these three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux this can be done through several system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. The earliest versions of UNIX used the system call &#039;stime&#039;, which set the system&#039;s idea of the time and date by altering the seconds, and it is still supported by Linux. &#039;settimeofday&#039; was created to set the timezone (the tz_dsttime field) as well as the time, but that field was never implemented correctly: each occurrence of it in the kernel source (apart from its declaration) is a bug. &lt;br /&gt;
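A quick sketch of the get-time calls: &#039;time&#039; and &#039;gettimeofday&#039; read the same wall clock, so their answers should agree to within a second or two of scheduling slack.&lt;br /&gt;

```c
#include <assert.h>
#include <sys/time.h>
#include <time.h>

/* Sketch: read the clock with both time() and gettimeofday() and check
   that the two answers agree to within a couple of seconds. */
int demo_time(void)
{
    time_t t = time(NULL);               /* seconds since the epoch */

    struct timeval tv;
    if (gettimeofday(&tv, NULL) != 0)    /* seconds plus microseconds */
        return -1;

    long diff = (long)(tv.tv_sec - t);
    return (diff >= -2 && diff <= 2) ? 0 : -1;
}
```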
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;: &#039;open&#039; opens a file so it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own system calls: &#039;olduname&#039; and &#039;uname&#039; get the name of, and information about, the current kernel (&#039;uname&#039; is also used in the newer versions of UNIX, though not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
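A tiny sketch of &#039;uname&#039;: it fills in a utsname structure whose fields name the running kernel and its release.&lt;br /&gt;

```c
#include <assert.h>
#include <string.h>
#include <sys/utsname.h>

/* Sketch: query the running kernel with uname().  On Linux, u.sysname
   is the string "Linux" and u.release holds the kernel version. */
int demo_uname(void)
{
    struct utsname u;
    if (uname(&u) != 0)
        return -1;
    /* both fields should come back as non-empty strings */
    return (strlen(u.sysname) > 0 && strlen(u.release) > 0) ? 0 : -1;
}
```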
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the main difference is that the Linux version stores the status information through an integer pointer passed as an argument rather than returning the integer itself. Linux has many more system calls of this type; here are a few: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; gets the process ID of the parent of the calling process and never fails. &#039;capget&#039; and &#039;capset&#039; are the raw kernel interface for getting and setting thread capabilities. These two system calls are Linux-specific, and the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix there are eleven system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, making one the parent and the other the child. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that finished. Wait fails if the process has no child to wait for or if its status argument points to an invalid address.&lt;br /&gt;
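The fork/wait pair can be sketched in a few lines: the child exits with a chosen status (42 here, picked arbitrarily) and the parent collects it with wait().&lt;br /&gt;

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: fork() a child that exits with status 42, then reap it with
   wait() and recover that status in the parent.  Returns 0 on success. */
int demo_fork_wait(void)
{
    pid_t child = fork();
    if (child < 0)
        return -1;
    if (child == 0)
        _exit(42);                      /* child: terminate immediately */

    int status = 0;
    if (wait(&status) != child)         /* parent: block until child ends */
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 42) ? 0 : -1;
}
```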
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call takes a binary file as an argument and turns it into a running process. When the call succeeds it does not return; instead it gives control to the new program, which replaces the process that made the call. The variants differ only in the arguments they are given.&lt;br /&gt;
&lt;br /&gt;
The following definitions for these system calls are taken from this reference [http://www.di.uevora.pt/~lmr/syscalls.html].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
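A compact sketch of the fork-then-exec idiom, using execl() to run /bin/sh with an exit status of 7 (the status value is chosen arbitrarily for the example):&lt;br /&gt;

```c
#include <assert.h>
#include <stddef.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: fork(), then replace the child's image with /bin/sh via
   execl().  On success execl() never returns; the parent recovers the
   shell's exit status through waitpid().  Returns that status. */
int demo_exec(void)
{
    pid_t child = fork();
    if (child < 0)
        return -1;
    if (child == 0) {
        execl("/bin/sh", "sh", "-c", "exit 7", (char *)NULL);
        _exit(127);                    /* reached only if execl() failed */
    }

    int status = 0;
    if (waitpid(child, &status, 0) != child || !WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);
}
```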
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call controls how a process reacts when a given signal is delivered to it. The process can respond in three different ways. The first is to ignore the signal completely: no matter how many times the signal is sent, the process will do nothing in response. The only signals that can never be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which for most signals means the process ends when it receives it. The last option is to catch the signal: when this occurs, the system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal argument is not a valid signal, or if there is no process with a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts, although of the exec group only execve is a true system call (the others are library wrappers around it). These calls behave the same way in Linux. However, &#039;&#039;signal()&#039;&#039; is not recommended because its implementation differs across versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls rarely undergo major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes being able to communicate with one another. Much as humans use a telephone as their portal for communicating with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039;. The call has the form int pipe(int file_descriptors[2]). The file_descriptors array holds two descriptors: one for reading data and one for writing it. Writes and reads proceed in sequential order and complete their task fully; for writes of up to PIPE_BUF bytes there are no partial writes, so the pipe transmits the whole block of data that was sent. The same concept holds for reading: data is consumed in the order it arrived before any newly arriving information is read.&lt;br /&gt;
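A minimal sketch of a pipe within one process: bytes written to the write end come back, in order, from the read end.&lt;br /&gt;

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Sketch: create a pipe with pipe(), push "ping" in at the write end,
   and read it back from the read end.  Returns 0 on success. */
int demo_pipe(void)
{
    int fds[2];                  /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) != 0)
        return -1;

    int ok = write(fds[1], "ping", 4) == 4;

    char buf[8] = {0};
    ok = ok && read(fds[0], buf, sizeof buf - 1) == 4
            && strcmp(buf, "ping") == 0;

    close(fds[0]);
    close(fds[1]);
    return ok ? 0 : -1;
}
```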
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through a queue, usually identified by IDs. &#039;&#039;msgget()&#039;&#039; returns the identifier of the message queue associated with a key. Closely related but not the same, the &#039;&#039;msgrcv()&#039;&#039; call receives a message from the queue identified by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on the message queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked. Semaphores are used to control access to shared resources such as files; the concept of file locking gives a good intuition for them. Semaphores are not usually held singly but in groups: &#039;&#039;semget()&#039;&#039; creates a set that can contain several semaphores, and &#039;&#039;semop()&#039;&#039; specifies the operation, where a positive value is added to the semaphore, a zero value waits for the semaphore to become zero, and a negative value blocks until the semaphore is large enough to subtract from. Semaphores were first conceived by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: The shared memory functions allow processes to create, attach, and detach shared memory regions. &#039;&#039;shmget()&#039;&#039; returns the ID of a shared memory region, and can create the region if it does not already exist. &#039;&#039;shmat()&#039;&#039; attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039;, detaching the shared memory.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. To avoid stray calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley-style directories. &lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics can be obtained through the structure whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
Parsing Input: Parsing is often used when the user enters data and the program must split that data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or separating numbers from characters. There are several different ways a programmer can parse the data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
Lastly, some system calls overlap and can be considered part of a specific category or mentioned among the miscellaneous system calls. The textbook &#039;&#039;Modern Operating Systems&#039;&#039; (3rd edition) treats &#039;&#039;chmod&#039;&#039;, described above under file management, as miscellaneous; similarly, it lists kill() as a miscellaneous system call. Hence, it can be difficult to decide whether a system call belongs in a specific category or simply in the &amp;quot;other&amp;quot; bin.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: they allow user-space programs to request kernel services that they otherwise have no authority to access. Over the years of development of Linux and Unix, the system calls have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. In this way the original 35 system calls have grown to an astonishing quantity of hundreds. With hundreds of system calls at one&#039;s disposal, all can be categorized into six major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that all come together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. &#039;&#039;A Quarter Century of Unix&#039;&#039;. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4685</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4685"/>
		<updated>2010-10-15T09:44:24Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it cannot access kernel memory and it cannot call kernel functions. The CPU mechanism that prevents a process from reaching the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Unix and Linux, the system call implementations are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, such as system calls dealing with errors. Today, Unix and Linux contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX in the early 1970s. In the paragraphs that follow, we describe the system calls in each of these categories, their evolution through history (major changes in functionality), and how they compare with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group cover every type of operation required to run a file system in the operating system. Creating, deleting, opening, and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and ownership, providing the basis of file-system security. The &#039;&#039;chdir&#039;&#039; system call lets a process change its current working directory. In the 4th Berkeley Software Distribution (4BSD), new system calls were added to give applications more control over the file system. The &#039;&#039;chroot&#039;&#039; call lets a process replace its current root directory with the one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, &#039;&#039;chown&#039;&#039; follows symbolic links, so a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments select everything from access modes, such as O_RDONLY (read-only), to status flags, such as O_APPEND (append mode). The only modifications made to these calls over time have been additional status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, releasing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a directory. In the earliest versions of Unix, deleting a directory required a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added to solve this problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls allow a process to read from and write to a file (identified by a file descriptor). The only notable change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call moves to a specified position in a file. It used a 16-bit offset, but was quickly replaced by &#039;&#039;lseek&#039;&#039;, which uses 32-bit offsets and is still used in modern Linux and Unix systems; on 32-bit systems, &#039;&#039;lseek64&#039;&#039; provides 64-bit offsets. The &#039;&#039;stat&#039;&#039; call lets a process get the status of a file. Two other versions of this call were later created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; reports the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; reports the status of a file identified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, &#039;&#039;stat&#039;&#039; reports nanosecond precision in the file&#039;s timestamp fields. With the release of 4.4BSD, two new calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as its argument. These calls are used in a UNIX environment; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Linux 2.6.16, a family of system calls was added that interprets relative pathnames against a directory file descriptor rather than the current working directory. They are easy to spot, since their names all end in &#039;at&#039;. A sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware. They are mainly used to request and release devices, to logically attach or detach them, to get and set device attributes, and to read from and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: devices are used as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, which maps or unmaps files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device. This system call is still used in UNIX environments, but since Linux 2.4, Linux has complemented it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039;, except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of UNIX, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that can’t be done using the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. These request codes are hardware-dependent, so there is no standard interface for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system’s information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which interacted with times and dates: it could return the time and date, and it set the system’s idea of the time and date by altering the seconds. &#039;stime&#039; is still used by Linux because it works reliably, unlike the timezone-setting part of &#039;settimeofday&#039;, which was created to change timezones (via the tz_dsttime field) as well as the time; every occurrence of that field in the kernel source (apart from its declaration) is considered a bug, so that feature effectively failed. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039; and &#039;write&#039;. &#039;open&#039; opens a file so the file can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; is used to indicate that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to that is &#039;uname&#039;, which also gets the name of and information about the current kernel (and is used in newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer, taking a pointer to that integer as its argument. In Linux there are many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets a process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never has any errors.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start and termination of processes, along with other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are eleven system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, which makes one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. Wait fails if the process has no child process to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call.&lt;br /&gt;
Each of these is called when different arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls, as described by this source: [http://www.di.uevora.pt/~lmr/syscalls.html]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command-line&lt;br /&gt;
arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: A signal is sent to the process when the proper conditions are met. When the program receives the signal it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which typically means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs, the UNIX system gives control to a handler function that executes whatever is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal name is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only execve() exists as a true system call (the others are C library wrappers around it). These system calls behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its different implementations across versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls are unlikely to ever see major modifications, but other system calls based on them may be created for specific cases, to make it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Much as humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the signature int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one is for reading data, and the other is for writing data. Both writing and reading proceed in sequential order and complete their task fully; i.e., there are no partial writes: the pipe writes the whole data that was sent and completes the transmission. The same concept holds for reading, where a message is read all the way through before another pipe, or new information coming into the pipe, is read.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all deal with sending and receiving messages through queues, usually involving IDs. &#039;&#039;msgget()&#039;&#039; acquires the message queue identifier related to a key. Closely related, but not the same, &#039;&#039;msgrcv()&#039;&#039; is used to receive a message from the queue identified by the msqid parameter, which is the ID of the queue to receive the message from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue, and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations through queries.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: The idea of semaphores consists of setting or checking them. They are used to control access to resources such as files; the concept of file locking gives a good intuition for semaphores. Semaphores aren&#039;t usually held singly, but rather in groups: a set that can contain several semaphores is created through the &#039;&#039;semget()&#039;&#039; call. &#039;&#039;semop()&#039;&#039; decides what we want the semaphore to accomplish; i.e., depending on whether we pass a positive, zero or negative value, the amount is added to the semaphore, the caller waits for it to reach zero, or the caller is blocked until the semaphore is large enough, respectively. Semaphores were first conceived by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: Functions involving shared memory allow the user to access, attach and detach shared address regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID for a shared memory region, and can also create the region if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
&lt;br /&gt;
UNIX and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form their own group. To avoid having random calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics on the time can be obtained through the structure given by these attributes: &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039;, &#039;&#039;tm_year&#039;&#039;, just to list a few.&lt;br /&gt;
Parsing Input: Parsing is often used when the user enters data and the program must split this data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or separating numbers from characters. There are several different ways a programmer can parse the data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the structure of the Linux kernel (2.6.30+) and UNIX operating systems for a long period of time. They are the gateway between user space and kernel services; more specifically, they allow user-space code to acquire kernel services it does not otherwise have the authority to use. Over the years of development of the Linux and UNIX operating systems, the system calls have not seen drastic changes. Rather than radical changes, the development of system calls has merely added more specific system calls to solve new issues that occur within the OS. Hence, the original 35 system calls have grown to an astonishing quantity consisting of hundreds of system calls. With hundreds of system calls available at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. Operating systems are colossal programs consisting of very intricate pieces all coming together to form what we now know as the Linux kernel (2.6.30+) or UNIX. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of UNIX. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4664</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4664"/>
		<updated>2010-10-15T09:04:34Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU enforces this by running user code in a restricted (protected) mode; system calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). On UNIX and Linux, user programs normally reach system calls through small wrapper functions in the C library.&lt;br /&gt;
&lt;br /&gt;
The UNIX and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t really fit in the other categories, like system calls dealing with errors. Today, the UNIX and Linux operating systems contain hundreds of system calls, but in general, they all descend from the 35 system calls that came with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows the process to change its current root directory to one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor so it can no longer be used. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest versions of UNIX, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit offset, but it was quickly replaced by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets and is still used in modern Linux and UNIX systems. As of this writing, developers are working to implement &#039;&#039;lseek64&#039;&#039;, a system call that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file’s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system. They both do the same thing, except &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a file link’s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that the calls could deal with pathnames relative to a directory file descriptor. They are easily spotted, as their names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: devices are used as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, which maps or unmaps files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device. This system call is still used in UNIX environments, but since Linux 2.4, Linux has complemented it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039;, except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of UNIX, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that can’t be done using the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. These request codes are hardware-dependent, so there is no standard interface for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system’s information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which interacted with times and dates: it could return the time and date, and it set the system’s idea of the time and date by altering the seconds. &#039;stime&#039; is still used by Linux because it works reliably, unlike the timezone-setting part of &#039;settimeofday&#039;, which was created to change timezones (via the tz_dsttime field) as well as the time; every occurrence of that field in the kernel source (apart from its declaration) is considered a bug, so that feature effectively failed. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039; and &#039;write&#039;. &#039;open&#039; opens a file so the file can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; is used to indicate that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to that is &#039;uname&#039;, which also gets the name of and information about the current kernel (and is used in newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer, taking a pointer to that integer as its argument. In Linux there are many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets a process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never has any errors.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are eleven system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;,&#039;&#039;wait()&#039;&#039;,&#039;&#039;execl()&#039;&#039;,&#039;&#039;execlp()&#039;&#039;,&#039;&#039;execle()&#039;&#039;,&#039;&#039;execvp()&#039;&#039;,&#039;&#039;execv()&#039;&#039;,&#039;&#039;execve()&#039;&#039;,&#039;&#039;exit()&#039;&#039;,&#039;&#039;signal()&#039;&#039;,&#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
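The fork()/wait() interplay described above can be sketched with Python&#039;s os module wrappers (a hypothetical illustration, not C, and it assumes a Unix-like host):&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # Child: fork() returned 0 here.
    os._exit(7)
else:
    # Parent: fork() returned the child's PID; wait() blocks until the child ends.
    done_pid, status = os.wait()
    print("child", done_pid, "exited with", os.WEXITSTATUS(status))
```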
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and turns it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call.&lt;br /&gt;
The variants differ only in the arguments they are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
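The usual pattern combines fork() with one of the exec calls: the child replaces itself with a new program while the parent waits. A sketch using Python&#039;s os.execv wrapper (assumes /bin/echo exists, as it does on typical Unix systems):&lt;br /&gt;

```python
import os

child = os.fork()
if child == 0:
    # execv() replaces this child's image; by convention argv[0] is the program name.
    os.execv("/bin/echo", ["echo", "hello from execv"])
    os._exit(1)   # only reached if execv() itself fails
_, status = os.wait()
print("echo exited with", os.WEXITSTATUS(status))
```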
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call determines how a process reacts when a given signal is delivered to it. A program can respond to a signal in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process does nothing in response. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which for many signals means the process ends when it receives it. The last option is to catch the signal: when this occurs the UNIX system gives control to a handler function that acts as appropriate for the process. &lt;br /&gt;
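The three dispositions (catch, default, ignore) can be sketched with Python&#039;s signal module, which wraps the underlying signal machinery (an illustration only; the handler name is made up):&lt;br /&gt;

```python
import os, signal

caught = []

def handler(signum, frame):
    # Catching: control passes to this function when the signal arrives.
    caught.append(signum)

signal.signal(signal.SIGTERM, handler)        # catch instead of the default (terminate)
os.kill(os.getpid(), signal.SIGTERM)

signal.signal(signal.SIGTERM, signal.SIG_IGN) # now ignore further SIGTERMs
os.kill(os.getpid(), signal.SIGTERM)
print("signals caught:", caught)
```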
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: This call sends a signal to a process identified by its PID. It fails if the signal given is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only execve exists as a true system call (the others are library wrappers around it). These system calls behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because its implementation differs across versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released these system calls rarely receive major modifications, but other system calls based on them may be created for specific cases, to make it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Similar to how humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In unix there are four subgroups of system calls that are related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the prototype int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one descriptor for reading data and the other for writing data. Writing and reading happen in sequential order and fully complete their task, i.e. there are no partial writes: the pipe writes the whole block of data that was sent and completes the transmission. The same concept holds for reading, where data is read all the way through before another pipe or new information coming into the pipe is read.&lt;br /&gt;
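A minimal sketch of the read-end/write-end pairing, via Python&#039;s os.pipe wrapper (illustrative; real uses normally share the descriptors across a fork):&lt;br /&gt;

```python
import os

# pipe() yields two descriptors: r for reading, w for writing.
r, w = os.pipe()
os.write(w, b"whole message")  # small writes to a pipe are delivered whole
os.close(w)                    # closing the write end lets the reader see EOF
data = os.read(r, 1024)
os.close(r)
print(data.decode())
```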
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through a queue, usually identified by an ID. &#039;&#039;msgget()&#039;&#039; returns the identifier of the message queue associated with a key. The &#039;&#039;msgrcv()&#039;&#039; call receives a message from the queue identified by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on a message queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is something a process sets or checks. Semaphores are used to control access to resources such as files; the concept of file locking gives a good intuition for them. Semaphores aren&#039;t usually held singly but rather in groups: a set that can contain several semaphores is created through the &#039;&#039;semget()&#039;&#039; call. &#039;&#039;semop()&#039;&#039; decides what we want the semaphore to accomplish: depending on whether the operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for it to become zero, or the caller blocks until the semaphore is large enough, respectively. Semaphores were first proposed by Dijkstra and used in computers in the late 60&#039;s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: Functions involving shared memory allow the user to access, attach and detach shared addresses. The &#039;&#039;shmget()&#039;&#039; call returns the ID for a shared memory region, creating it if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
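The create/attach/detach/remove lifecycle above has an analogue in Python&#039;s multiprocessing.shared_memory module (a sketch of the same lifecycle, not the System V shm* calls themselves):&lt;br /&gt;

```python
from multiprocessing import shared_memory

# Create a named region (akin to shmget with IPC_CREAT) and attach to it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"
snapshot = bytes(shm.buf[:5])
print(snapshot)
shm.close()    # detach, like shmdt()
shm.unlink()   # remove the region itself
```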
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. To avoid leaving random calls floating around, we simply group them into this category. &lt;br /&gt;
&lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics on the time can be obtained through the structure given by these attributes: &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039;, &#039;&#039;tm_year&#039;&#039;, just to list a few.&lt;br /&gt;
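Python&#039;s time module exposes a struct with the same field names, so the layout above can be inspected directly (illustrative only):&lt;br /&gt;

```python
import time

# localtime() fills a struct whose fields match the names listed above.
t = time.localtime()
print(t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
```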
Parsing Input: Parsing is often used when the user enters data and the program must divide this data appropriately in order to obtain specific parts of it, e.g. separating words from each other, or separating numbers from characters. There are several different ways a programmer can parse data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems for a long period of time. They are the gateway between user space and kernel services: they allow user-space programs to obtain kernel services that they do not have the authority to perform directly. Over the years of development of the Linux and Unix operating systems, system calls have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. This is how the original 35 system calls grew to an astonishing quantity of hundreds of system calls. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of many intricate pieces that come together to form what we now know as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4663</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4663"/>
		<updated>2010-10-15T09:04:20Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU enforces this by running user code in a restricted mode, commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Unix and Linux, the system call implementations are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls can be roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls shipped with the original UNIX OS in the early 70s. In the next sections we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
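A small sketch of &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; through Python&#039;s os module wrappers (illustrative; the temporary file path is generated, not a fixed name):&lt;br /&gt;

```python
import os, stat, tempfile

# chmod: restrict a fresh file to owner read/write; stat reports the result.
path = tempfile.mkstemp()[1]
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))

# chdir changes the working directory by path (fchdir would take a descriptor).
os.chdir(os.path.dirname(path))
print(os.getcwd())
os.remove(path)
```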
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags range from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, freeing the descriptor number for reuse. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest version of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
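The flag combinations above can be tried directly through os.open, which passes them to the underlying system call (a sketch; the demo path is made up):&lt;br /&gt;

```python
import os

path = "/tmp/open_flags_demo.txt"
if os.path.exists(path):
    os.remove(path)
# O_CREAT creates the file if needed; O_APPEND is a status flag fixed at open time.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"appended\n")
os.close(fd)   # the descriptor number is now free to be handed out again
data = open(path, "rb").read()
os.remove(path)
print(data)
```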
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow reading from and writing to a file (assigned to a file descriptor). The only change was in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit address offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which allows the call to use 32-bit address offsets. It is still used in modern Linux and Unix systems. As of now, developers are working to implement &#039;&#039;lseek64&#039;&#039;, a system call that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat returns a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system. They both do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same functionality.&lt;br /&gt;
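The read/write/lseek/fstat calls compose naturally; a minimal sketch via Python&#039;s os module wrappers (illustrative only; the file path is made up):&lt;br /&gt;

```python
import os

path = "/tmp/lseek_demo.bin"
fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"0123456789")
# lseek repositions the file offset; SEEK_SET counts from the start of the file.
os.lseek(fd, 4, os.SEEK_SET)
chunk = os.read(fd, 3)
size = os.fstat(fd).st_size   # fstat: file status via the descriptor
os.close(fd)
os.remove(path)
print(chunk, size)
```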
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;. The &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take relative pathnames as arguments. They can easily be spotted, as the system call names all end with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
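Python exposes the *at behaviour through the dir_fd parameter of os functions, which map onto these calls where the platform supports them (a sketch, assuming Linux-style dir_fd support; the filename is made up):&lt;br /&gt;

```python
import os

# os.open with dir_fd resolves the name relative to that directory descriptor,
# which is what openat() does at the system-call level.
dfd = os.open("/tmp", os.O_RDONLY)
fd = os.open("at_demo.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dfd)
os.write(fd, b"hi")
os.close(fd)
size = os.stat("at_demo.txt", dir_fd=dfd).st_size   # like fstatat()
os.unlink("at_demo.txt", dir_fd=dfd)                # like unlinkat()
os.close(dfd)
print(size)
```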
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the mount system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel. If a process is created using clone() with the CLONE_NEWNS flag, the process has a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it takes flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use them as if the devices were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;. This system call is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device. This system call is still used in a Unix environment, but since Linux 2.4, Linux supplements it with the mmap2 system call. It is basically the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
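Python&#039;s mmap module wraps this call, so the map-then-address-as-memory idea can be sketched briefly (illustrative; the file path is made up):&lt;br /&gt;

```python
import mmap, os

path = "/tmp/mmap_demo.bin"
fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)
os.ftruncate(fd, 4096)             # the mapping needs a backing region
m = mmap.mmap(fd, 4096)            # map the file into memory
m[0:5] = b"hello"                  # the mapped area is addressable like memory
first = bytes(m[0:5])
m.close()
os.close(fd)
os.remove(path)
print(first)
```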
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that can&#039;t be done using the standard system calls. This helps in dealing with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard available for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system&#039;s own information back to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub type is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few other ones like &#039;ftime&#039;. In the earliest versions of UNIX the system call used was &#039;stime&#039;, which was used to interact with times and dates. &#039;stime&#039; could return the time and date and set the system&#039;s idea of the time and date by altering the seconds. &#039;stime&#039; is still used by Linux because it is successful, unlike the timezone half of &#039;settimeofday&#039;, which was created to change the timezone (the tz_dsttime field) as well as the time; every use of that field in the kernel source (apart from its declaration) is a bug, so that part fails its purpose. &lt;br /&gt;
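The get side of these calls can be sketched with Python&#039;s time module, which reports the same epoch-seconds value the classic calls return (illustrative only):&lt;br /&gt;

```python
import time

# time() returns seconds since the epoch, the same value the classic
# time/gettimeofday calls report (gettimeofday adds sub-second precision).
now = time.time()
print(int(now))
```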
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this using the following system calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so that it can be written to or read from. &#039;read&#039; retrieves data from the file, and &#039;write&#039; modifies data in the file. &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition to those system calls, Linux has its own unique system calls: &#039;olduname&#039; and &#039;uname&#039; both get the name of and information about the current kernel (&#039;uname&#039; being the newer form), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub type is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process ID. &#039;capget&#039; and &#039;capset&#039; are the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) changes as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are eleven system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;,&#039;&#039;wait()&#039;&#039;,&#039;&#039;execl()&#039;&#039;,&#039;&#039;execlp()&#039;&#039;,&#039;&#039;execle()&#039;&#039;,&#039;&#039;execvp()&#039;&#039;,&#039;&#039;execv()&#039;&#039;,&#039;&#039;execve()&#039;&#039;,&#039;&#039;exit()&#039;&#039;,&#039;&#039;signal()&#039;&#039;,&#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and turns it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call.&lt;br /&gt;
The variants differ only in the arguments they are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call determines how a process reacts when a given signal is delivered to it. A program can respond to a signal in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process does nothing in response. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which for many signals means the process ends when it receives it. The last option is to catch the signal: when this occurs the UNIX system gives control to a handler function that acts as appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: This call sends a signal to a process identified by its PID. It fails if the signal given is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only execve exists as a true system call (the others are library wrappers around it). These system calls behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because its implementation differs across versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released these system calls rarely receive major modifications, but other system calls based on them may be created for specific cases, to make it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes being able to communicate with one another. Much as humans use a telephone to talk to each other, communicating processes use &amp;quot;pipes&amp;quot; and similar mechanisms as their gateway. &lt;br /&gt;
&lt;br /&gt;
In unix there are four subgroups of system calls that are related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; has the prototype int pipe(int file_descriptors[2]). File_descriptors is an array with two entries: one for reading data and one for writing data. Writes and reads proceed in sequential order and complete fully; that is, there are no partial writes below the pipe buffer size, so the pipe transmits the whole block of data that was sent, and a read likewise consumes its data completely before the next read begins.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all deal with sending and receiving messages through a queue, usually identified by IDs. &#039;&#039;msgget()&#039;&#039; returns the message queue identifier associated with a given key, creating the queue if requested. &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue named by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on the queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked. They are used to control access to shared resources such as files; file locking gives a good intuition for them. Semaphores usually aren&#039;t handled singly but in groups: &#039;&#039;semget()&#039;&#039; creates a set that can contain several semaphores. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to do: depending on whether the operation value is positive, zero, or negative, the value is added to the semaphore, the caller waits for it to become zero, or the caller blocks until the value is large enough, respectively. Semaphores were first proposed by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: These functions let processes create, attach to, and detach from shared memory regions. The &#039;&#039;shmget()&#039;&#039; command returns the ID for a shared memory region, creating it if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching it.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. To avoid stray calls floating around, we simply group them into this category. &lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
&lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics can be obtained through the structure it returns, whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
Parsing Input: Parsing is used when the user enters data and the program must split that data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or numbers from characters. There are several ways a programmer can parse the data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the Linux kernel (2.6.30+) and the UNIX operating systems for a long time. They are the gateway between user space and kernel services: they are the only means by which an ordinary process, which cannot touch the kernel directly, may request those services. Over the years of development of Linux and UNIX, the system calls have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. This is how the original set of roughly 35 system calls grew to the hundreds available today. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or UNIX. System calls are only a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual, The UNIX and Linux Forums. http://www.unix.com/man-page/FreeBSD/2/&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4659</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4659"/>
		<updated>2010-10-15T08:56:53Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mode that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In UNIX and Linux, system calls are typically reached through small wrapper functions in the C library.&lt;br /&gt;
&lt;br /&gt;
UNIX and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t really fit in the other categories, like system calls dealing with errors. Today, the UNIX and Linux operating systems contain hundreds of system calls, but in general they all grew out of the roughly 35 system calls that came with the original UNIX in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing files are just a few examples, and most of them have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows the process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, releasing it for reuse. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest versions of UNIX, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved this problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
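Python&#039;s os module exposes thin wrappers over these very calls, so the sequence above can be sketched directly (the file and directory names here are arbitrary, and a scratch directory keeps the example self-contained):

```python
import os
import tempfile

base = tempfile.mkdtemp()   # scratch directory for the demonstration

# open() with O_CREAT plays the role of the historical creat() call;
# O_WRONLY is an access mode, O_APPEND would be a status flag.
fd = os.open(os.path.join(base, "demo.txt"), os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"hello")
os.close(fd)                # close() releases the descriptor for reuse

os.mkdir(os.path.join(base, "adir"))     # mkdir
os.rename(os.path.join(base, "adir"),
          os.path.join(base, "bdir"))    # rename (added in 4.2BSD)
os.rmdir(os.path.join(base, "bdir"))     # rmdir (added in 4.2BSD)

remaining = sorted(os.listdir(base))     # only demo.txt is left
print(remaining)
```

Each of these Python functions issues the corresponding system call underneath, which is why the flag names match the C constants.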
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file (identified by a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. It used a 16-bit address offset, but was replaced very quickly by &#039;&#039;lseek&#039;&#039;, which allows 32-bit address offsets and is still used in modern Linux and UNIX systems; &#039;&#039;lseek64&#039;&#039; extends this to 64-bit offsets for large files. The &#039;&#039;stat&#039;&#039; system call lets processes get the status of a file. Two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, stat returns a nanoseconds field in the file’s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system; they do the same thing except &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as its argument. These calls are used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same functionality.&lt;br /&gt;
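The stat status fields, including the nanosecond timestamp mentioned above, are easy to inspect through Python&#039;s os.stat wrapper (the temporary file here is illustrative):

```python
import os
import tempfile

# Create a small file so stat has something to report on.
fd, path = tempfile.mkstemp()
os.write(fd, b"abc")
os.close(fd)

st = os.stat(path)            # wraps the stat()/fstat() family of calls
size = st.st_size             # 3 bytes were written above
mtime_ns = st.st_mtime_ns     # nanosecond timestamp field (kernel 2.5.48+)
os.remove(path)

print(size)
```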
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
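The difference between hard links and the 4.2BSD symbolic links can be demonstrated with the os module wrappers for these calls (file names are arbitrary):

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "target.txt")
with open(target, "w") as f:
    f.write("data")

hard = os.path.join(base, "hard.txt")
os.link(target, hard)              # link: a second name for the same file
soft = os.path.join(base, "soft.txt")
os.symlink(target, soft)           # symlink: a name that refers to a name

nlink = os.stat(target).st_nlink   # hard-link count is now 2
os.unlink(soft)                    # removes only the symbolic link itself
print(nlink, os.path.exists(target))
```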
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take a directory file descriptor together with a relative pathname as arguments. They can easily be spotted as their names all finish with &#039;at&#039;. Here is a sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
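On platforms that support them (e.g. Linux), Python routes calls with a dir_fd argument through these &#039;at&#039; variants, which makes the idea easy to try out (the scratch directory and names are illustrative):

```python
import os
import tempfile

base = tempfile.mkdtemp()
dirfd = os.open(base, os.O_RDONLY)   # a descriptor for the directory itself

# Passing dir_fd resolves the relative paths against dirfd, using
# openat, mkdirat and unlinkat underneath where available.
fd = os.open("rel.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dirfd)
os.close(fd)
os.mkdir("sub", dir_fd=dirfd)
os.unlink("rel.txt", dir_fd=dirfd)
os.close(dirfd)

remaining = sorted(os.listdir(base))
print(remaining)
```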
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the mount system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use them as if the devices were files, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device directly. This system call is still used in UNIX environments, and since Linux 2.4 there is additionally the mmap2 system call. It is basically the same as mmap except its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
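Python&#039;s mmap module wraps this call; a minimal sketch of an anonymous mapping (no backing file, signalled by the -1 descriptor) shows the memory-like access the text describes:

```python
import mmap

# mmap(2) with no file descriptor: a fresh page-aligned region of memory.
buf = mmap.mmap(-1, 4096)     # 4096 bytes, one typical page
buf[:5] = b"hello"            # write through the mapping like an array
data = bytes(buf[:5])         # read it back the same way
buf.close()                   # munmap: tear the mapping down

print(data)
```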
&lt;br /&gt;
&lt;br /&gt;
Since Version 7 of UNIX, the &#039;&#039;ioctl&#039;&#039; system call has been used for device-specific operations that can’t be done using the standard system calls, which helps in dealing with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard interface for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to these system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which set the system’s idea of the time and date by giving the seconds since the epoch, and it is still available in Linux. By contrast, the timezone field (tz_dsttime) that &#039;settimeofday&#039; was also meant to change has never worked properly: each occurrence of this field in the kernel source (apart from its declaration) is a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to that is &#039;uname&#039;, which does the same (and is also used in newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes, and some of them are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets them, and &#039;getppid&#039; gets the parent process ID. &#039;capget&#039; and &#039;capset&#039; are the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the details of their use (in particular the format of the cap_user_*_t types) change as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
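The simplest attribute getters are one-liners through the os module, which issues getpid() and getppid() directly (there is no stdlib wrapper for the Linux-specific capget/capset):

```python
import os

pid = os.getpid()    # getpid(): the caller's own process ID
ppid = os.getppid()  # getppid(): the parent's ID; this call never fails

print(pid, ppid)
```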
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX, the following system calls make up the process control calls:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;,&#039;&#039;wait()&#039;&#039;,&#039;&#039;execl()&#039;&#039;,&#039;&#039;execlp()&#039;&#039;,&#039;&#039;execle()&#039;&#039;,&#039;&#039;execvp()&#039;&#039;,&#039;&#039;execv()&#039;&#039;,&#039;&#039;execve()&#039;&#039;,&#039;&#039;exit()&#039;&#039;,&#039;&#039;signal()&#039;&#039;,&#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, making one the parent and the &lt;br /&gt;
other the child. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. Wait fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
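On a Unix system, the fork/wait pair just described can be sketched with Python&#039;s os wrappers (the exit status 7 is an arbitrary example value):

```python
import os

pid = os.fork()                    # fork(): one process becomes two
if pid == 0:
    os._exit(7)                    # child: end immediately with status 7
else:
    done, status = os.wait()       # parent: block until the child ends
    code = os.WEXITSTATUS(status)  # decode the child's exit status value
    print(done == pid, code)
```

The parent sees the child&#039;s PID from both fork() and wait(), while the child sees fork() return 0, exactly as the text describes.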
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039; and &#039;&#039;execve()&#039;&#039; are system calls based on the same principle: the call &lt;br /&gt;
takes as an argument a binary file and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the call. &lt;br /&gt;
Each of them is used when different kinds of arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as the command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
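A typical use of the exec family is in a freshly forked child, as in this Unix-only sketch with os.execvp (echo is used only as a convenient program to run):

```python
import os

pid = os.fork()
if pid == 0:
    # execvp(): searches PATH for "echo", then replaces this child's
    # process image with it; on success it never returns.
    os.execvp("echo", ["echo", "hello from execvp"])
    os._exit(127)                  # reached only if execvp itself failed
else:
    _, status = os.wait()
    code = os.WEXITSTATUS(status)  # 0 if echo ran and exited normally
    print(code)
```

Note the fallback os._exit after the call: because exec does not return on success, any code after it runs only on failure.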
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call controls how a process reacts when a signal is delivered to it. The process can act in three different ways. The first is to ignore the signal completely: no matter how many times the signal is sent, the process will do nothing in response. The only signals that can never be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal at its default disposition, which for most signals means the process will end when it receives one. The last option is to catch the signal: when it arrives, the system transfers control to a handler function supplied by the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: This call sends a signal to a specified process. It fails if the signal name is not a valid signal or if no process has a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
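Catching a signal and sending one with kill() can both be shown in one Unix-only sketch using Python&#039;s signal module, which installs handlers the way sigaction() does (SIGUSR1 is chosen because it has no other meaning here):

```python
import os
import signal

caught = []

def handler(signum, frame):
    # "Catching" the signal: the kernel transfers control here.
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # kill(): send ourselves SIGUSR1

print(caught == [signal.SIGUSR1])
```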
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only execve exists as a true system call (the others are library wrappers around it). These calls behave the same way in Linux. However, using &#039;&#039;signal()&#039;&#039; is not recommended because its implementation differs across versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes how the process reacts to any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these core calls rarely see major modifications, but new system calls based on them may be added for specific cases to make programs easier to write.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes being able to communicate with one another. Much as humans use a telephone to talk to each other, communicating processes use &amp;quot;pipes&amp;quot; and similar mechanisms as their gateway. &lt;br /&gt;
&lt;br /&gt;
In unix there are four subgroups of system calls that are related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; has the prototype int pipe(int file_descriptors[2]). File_descriptors is an array with two entries: one for reading data and one for writing data. Writes and reads proceed in sequential order and complete fully; that is, there are no partial writes below the pipe buffer size, so the pipe transmits the whole block of data that was sent, and a read likewise consumes its data completely before the next read begins.   &lt;br /&gt;
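The two-descriptor shape of pipe() is visible in Python&#039;s os.pipe wrapper, which returns the read end and write end as a pair:

```python
import os

r, w = os.pipe()                  # pipe(): read end and write end
os.write(w, b"through the pipe")  # writer side
os.close(w)                       # close the writer so the reader sees EOF
msg = os.read(r, 1024)            # reader side gets the whole message
os.close(r)

print(msg)
```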
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all deal with sending and receiving messages through a queue, usually identified by IDs. &#039;&#039;msgget()&#039;&#039; returns the message queue identifier associated with a given key, creating the queue if requested. &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue named by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on the queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked. They are used to control access to shared resources such as files; file locking gives a good intuition for them. Semaphores usually aren&#039;t handled singly but in groups: &#039;&#039;semget()&#039;&#039; creates a set that can contain several semaphores. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to do: depending on whether the operation value is positive, zero, or negative, the value is added to the semaphore, the caller waits for it to become zero, or the caller blocks until the value is large enough, respectively. Semaphores were first proposed by Dijkstra and used in computers in the late 1960s.&lt;br /&gt;
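Python&#039;s standard library has no direct semget()/semop() wrapper, but threading.Semaphore models the same counter Dijkstra described, so the blocking behaviour can be sketched with it (acquire is the wait/P operation, release the signal/V operation):

```python
import threading

sem = threading.Semaphore(1)   # one unit available, like a single file lock

got_first = sem.acquire(blocking=False)   # succeeds; counter drops to 0
got_second = sem.acquire(blocking=False)  # fails: would block until positive
sem.release()                             # counter back to 1

print(got_first, got_second)
```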
&lt;br /&gt;
Shared Memory: These functions let processes create, attach to, and detach from shared memory regions. The &#039;&#039;shmget()&#039;&#039; command returns the ID for a shared memory region, creating it if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching it.&lt;br /&gt;
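Python&#039;s multiprocessing.shared_memory wraps the POSIX shm_open/mmap route rather than the System V shmget()/shmat() calls in the text, but the idea is the same: a named region of memory that unrelated processes can attach to. A minimal sketch:

```python
from multiprocessing import shared_memory

# Create a named region, analogous to shmget() creating a segment.
shm = shared_memory.SharedMemory(create=True, size=64)
shm.buf[:2] = b"hi"

# A second handle, as another process would attach it (like shmat()).
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:2])

other.close()   # detach this handle, like shmdt()
shm.close()
shm.unlink()    # remove the region itself

print(data)
```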
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few which are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. To avoid stray calls floating around, we simply group them into this category. &lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics can be obtained through the structure it returns, whose fields include &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, to list a few.&lt;br /&gt;
Parsing Input: Parsing is used when the user enters data and the program must split that data into appropriate divisions in order to obtain specific parts of it, e.g. separating words from each other, or numbers from characters. There are several ways a programmer can parse the data to extract the specific pieces that need to be analyzed.&lt;br /&gt;
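The tm_* fields mentioned under Time above are visible through Python&#039;s time module, whose localtime() breaks the time-of-day value into exactly that structure:

```python
import time

# time() wraps the time-of-day call; localtime() splits the result
# into the tm_* fields of the broken-down time structure.
now = time.localtime(time.time())
print(now.tm_hour, now.tm_min, now.tm_sec, now.tm_year)
```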
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the Linux kernel (2.6.30+) and Unix operating systems. They are the gateway between user space and the kernel: they let user-space programs request kernel services that processes cannot otherwise reach directly. Over the years of Linux and Unix development, system calls have not undergone drastic changes. Rather than radically redesigning existing calls, developers have mostly added new, more specific system calls to solve new issues as they arose within the OS. This approach has allowed the original 35 system calls to grow to an astonishing quantity of several hundred. The hundreds of system calls available at one&#039;s disposal can all be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of many intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4656</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4656"/>
		<updated>2010-10-15T08:50:42Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Miscellaneous System Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not able to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions, because the CPU runs user code in what is commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). On Unix and Linux, programs typically invoke system calls through small wrapper functions written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls can be roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links; a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of new status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, releasing it for reuse. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest versions of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved this problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
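As an illustration of these flags, here is a small sketch using Python&#039;s os module, which wraps the same Unix open, write and close calls (the file name is made up):&lt;br /&gt;

```python
import os

def append_line(path, text):
    # O_WRONLY: write-only access mode; O_CREAT: create if missing;
    # O_APPEND: every write goes to the end of the file (a status flag)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, text.encode())
    finally:
        os.close(fd)  # release the descriptor for reuse

if __name__ == "__main__":
    append_line("demo_append.txt", "first line\n")
    append_line("demo_append.txt", "second line\n")
    with open("demo_append.txt") as f:
        print(f.read())
```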
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change was in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit address offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which allows the call to use 32-bit address offsets and is still used in modern Linux and Unix systems. Developers are now working on &#039;&#039;lseek64&#039;&#039;, a version that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system. They do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
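A short sketch of &#039;&#039;lseek&#039;&#039; and &#039;&#039;stat&#039;&#039; in action, again through Python&#039;s os wrappers (the file name and helper are illustrative only):&lt;br /&gt;

```python
import os

def read_at(path, offset, count):
    # lseek moves the file offset to an absolute position (SEEK_SET),
    # then read fetches bytes starting from there
    fd = os.open(path, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        return os.read(fd, count)
    finally:
        os.close(fd)

if __name__ == "__main__":
    with open("demo_seek.txt", "w") as f:
        f.write("hello world")
    print(read_at("demo_seek.txt", 6, 5))    # b'world'
    # stat reports file status, including the size in bytes
    print(os.stat("demo_seek.txt").st_size)  # 11
```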
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;. The &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that existing calls could take pathnames relative to a directory file descriptor as arguments. They can easily be spotted, as the new names all end with &#039;at&#039;. Here is a sample list: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
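Python exposes the &#039;at&#039; family through the dir_fd keyword argument of its os wrappers; the following sketch (with invented names) shows a relative pathname being resolved against a directory descriptor, as &#039;&#039;openat&#039;&#039; does:&lt;br /&gt;

```python
import os, tempfile

def create_in_dir(dir_path, name, data):
    # open the directory itself, then use its descriptor via dir_fd so the
    # relative pathname 'name' is resolved inside that directory (openat)
    dfd = os.open(dir_path, os.O_RDONLY)
    try:
        fd = os.open(name, os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dfd)
        try:
            os.write(fd, data)
        finally:
            os.close(fd)
    finally:
        os.close(dfd)

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    create_in_dir(d, "hello.txt", b"hi")
    print(os.path.exists(os.path.join(d, "hello.txt")))
```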
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel. If a process is created using clone() with the CLONE_NEWNS flag, the new process gets a namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use them as if the devices were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
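A brief sketch of a file mapping through Python&#039;s mmap module, which is backed by the same kernel call (the file name and helper are hypothetical):&lt;br /&gt;

```python
import mmap, os, tempfile

def upper_first_byte(path):
    # map the whole file into memory and modify it through the mapping
    fd = os.open(path, os.O_RDWR)
    try:
        m = mmap.mmap(fd, 0)            # length 0 maps the entire file
        try:
            m[0:1] = m[0:1].upper()     # write through the mapped memory
            m.flush()                   # push changes back to the file
        finally:
            m.close()
    finally:
        os.close(fd)

if __name__ == "__main__":
    p = os.path.join(tempfile.gettempdir(), "demo_mmap.txt")
    with open(p, "w") as f:
        f.write("hello")
    upper_first_byte(p)
    print(open(p).read())   # Hello
```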
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that cannot be done using the standard system calls. This helps the system deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. These request codes are hardware-dependent, so there is no standard available for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few other ones like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which set the system&#039;s idea of the time and date by altering the seconds count. &#039;stime&#039; is still used by Linux because it remains reliable, unlike the timezone field of &#039;settimeofday&#039; (tz_dsttime), which was created to record daylight-saving information but failed: each occurrence of this field in the kernel source (apart from its declaration) is a bug. &lt;br /&gt;
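A quick sketch of the get-time side using Python&#039;s time module, which wraps these calls:&lt;br /&gt;

```python
import time

# time() is the classic call returning seconds since the Epoch;
# localtime() breaks it down into the tm_* fields of struct tm
now = time.time()
tm = time.localtime(now)
print(int(now))
print(tm.tm_year, tm.tm_mon, tm.tm_mday)
print(tm.tm_hour, tm.tm_min, tm.tm_sec)
```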
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039; and &#039;write&#039;. &#039;open&#039; opens a file so the file can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; is used to indicate that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to that is &#039;uname&#039;, which also gets the name of and information about the current kernel (and is used in the newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file or device attributes. In UNIX there are several system calls for processing file and device attributes, and some of them are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process ID. The &#039;capget&#039; and &#039;capset&#039; calls are the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
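A tiny sketch of &#039;getppid&#039; and its sibling &#039;getpid&#039; through Python&#039;s os wrappers:&lt;br /&gt;

```python
import os

# every process can ask for its own PID and its parent's PID;
# as noted above, getppid always succeeds
my_pid = os.getpid()
parent_pid = os.getppid()
print(my_pid, parent_pid)
```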
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix there are 11 system calls that make up the Process Control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. Wait fails if the process has no child process to wait for or if its status argument points to an invalid address.&lt;br /&gt;
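The fork/wait pairing described above can be sketched as follows (Python&#039;s os module wraps both calls; the helper name is our own):&lt;br /&gt;

```python
import os

def run_child(code):
    # fork returns twice: 0 in the new child, the child's PID in the parent
    pid = os.fork()
    if pid == 0:
        os._exit(code)            # child terminates immediately with 'code'
    # parent: wait blocks until that child finishes, then reports its PID
    done_pid, status = os.waitpid(pid, 0)
    assert done_pid == pid
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(run_child(7))   # 7
```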
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call. &lt;br /&gt;
Each variant is called when different kinds of arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls, as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
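Because a successful exec never returns, a program normally forks first and execs in the child. A sketch using &#039;&#039;execv&#039;&#039; through Python&#039;s os wrappers, assuming a standard Linux layout where /bin/echo exists:&lt;br /&gt;

```python
import os

def run_program(path, argv):
    # fork, then replace the child's process image with the new program
    pid = os.fork()
    if pid == 0:
        os.execv(path, argv)    # on success this call never returns
        os._exit(127)           # reached only if execv itself failed
    # parent: wait for the new program to finish and return its status
    status = os.waitpid(pid, 0)[1]
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    # /bin/echo takes a full path name, exactly as execv requires
    print(run_program("/bin/echo", ["echo", "hello"]))
```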
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: A signal is sent to a process when the proper conditions are met. When the program receives the signal it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that cannot be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which usually means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs the Unix system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
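The third option, catching a signal, can be sketched as follows with Python&#039;s signal module (which wraps this mechanism):&lt;br /&gt;

```python
import os, signal

# record which signals the handler saw
caught = []

def handler(signum, frame):
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler (catch it)
os.kill(os.getpid(), signal.SIGUSR1)     # send ourselves the signal
print(caught)                            # the handler ran, not the default
```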
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: This call sends a signal to a specified process when something occurs. It fails if the signal name is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts except for the exec group, of which only execve exists as a true system call. These calls also behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its different implementations in different versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the actions of the process when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will almost never undergo major modifications, but other system calls based on them may be created for specific cases that make it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communications calls relate to the concept of processes having the ability to communicate with one another. Just as humans use a telephone as their portal to communicate with each other, communications calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the form int pipe(file_descriptors), where int file_descriptors[2] is an array with two entries: one for reading data and the other for writing data. Both writing and reading proceed in sequential order and fully complete their task, i.e. there are no partial writes; the pipe writes the whole block of data that was sent before completing the transmission. The same concept holds for reading, where a message is read all the way through before another read begins on new information coming into the pipe. &lt;br /&gt;
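A minimal sketch of &#039;&#039;pipe()&#039;&#039; through Python&#039;s os wrappers, showing the read end and the write end:&lt;br /&gt;

```python
import os

# pipe() hands back the two descriptors described above:
# file_descriptors[0] for reading and file_descriptors[1] for writing
r, w = os.pipe()
os.write(w, b"hello through the pipe")   # the whole write completes
os.close(w)                              # closing the write end signals EOF
data = os.read(r, 1024)                  # the read drains the message
os.close(r)
print(data)
```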
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending messages to and receiving messages from a queue, usually by means of IDs. &#039;&#039;msgget()&#039;&#039; returns the message queue identifier associated with a key. Closely related, but not the same, the &#039;&#039;msgrcv()&#039;&#039; call is used to receive a message from the queue identified by the msqid parameter, which holds the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue; it can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations on the queue. &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked, and semaphores are used to control access to shared resources; the concept of file locking gives a good intuition for how they work. Semaphores are not usually held singly but rather in groups: a set that can contain several semaphores is created through the &#039;&#039;semget()&#039;&#039; call. &#039;&#039;semop()&#039;&#039; decides what we want the semaphore to accomplish: depending on whether we pass a positive, zero or negative value, the amount is added to the semaphore, the caller waits, or the caller is blocked until the semaphore becomes large enough, respectively. Semaphores were first conceived by Dijkstra and used in computers in the late 60s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: These functions let processes create, attach and detach shared memory regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating the region if it does not already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory region to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching it.&lt;br /&gt;
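The create/attach/detach cycle can be sketched with POSIX shared memory from the Python standard library (note this is only an analogue: &#039;&#039;shmget()&#039;&#039; and friends are System V calls with no direct stdlib wrapper):&lt;br /&gt;

```python
from multiprocessing import shared_memory

# create a named shared memory region, roughly what shmget does
creator = shared_memory.SharedMemory(create=True, size=16)
creator.buf[0:5] = b"hello"

# attach a second handle to the same region by name, as shmat would
attached = shared_memory.SharedMemory(name=creator.name)
data = bytes(attached.buf[0:5])
print(data)

attached.close()    # detach this handle, like shmdt
creator.close()
creator.unlink()    # remove the region entirely
```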
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few that are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
This category contains the system calls that do not have enough similar calls to form a group of their own. Rather than leaving these calls floating around on their own, we simply group them into this category. &lt;br /&gt;
Directories: These are special files that contain a number of filenames. There are different variations of directories, e.g. System V and Berkeley style directories. &lt;br /&gt;
Time: Intuitively, this call allows the user to access the time of day. Specifics on the time can be obtained through the fields of the structure it returns: &#039;&#039;tm_sec&#039;&#039;, &#039;&#039;tm_min&#039;&#039;, &#039;&#039;tm_hour&#039;&#039;, &#039;&#039;tm_mday&#039;&#039;, &#039;&#039;tm_mon&#039;&#039; and &#039;&#039;tm_year&#039;&#039;, just to list a few.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the Linux kernel (2.6.30+) and Unix operating systems. They are the gateway between user space and the kernel: they let user-space programs request kernel services that processes cannot otherwise reach directly. Over the years of Linux and Unix development, system calls have not undergone drastic changes. Rather than radically redesigning existing calls, developers have mostly added new, more specific system calls to solve new issues as they arose within the OS. This approach has allowed the original 35 system calls to grow to an astonishing quantity of several hundred. The hundreds of system calls available at one&#039;s disposal can all be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of many intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4642</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4642"/>
		<updated>2010-10-15T08:31:21Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Communications Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not able to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions, because the CPU runs user code in what is commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). On Unix and Linux, programs typically invoke system calls through small wrapper functions written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls can be roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links; a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Flag arguments select everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, releasing it for reuse. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest version of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (designated by a file descriptor). The only change came in Unix System V release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used a 16-bit offset, but it was quickly replaced by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets and is still used in modern Linux and Unix systems. As of now, developers are working on &#039;&#039;lseek64&#039;&#039;, a system call that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file’s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system; they do the same thing, except that fstatvfs takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that existing calls could take relative pathnames as arguments. They can easily be spotted, as their names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware; they are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls let the operating system load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, was per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments are used to better control the device: you use devices as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device directly. This system call is still used in Unix environments, but since Linux 2.4, Linux has supplemented it with the mmap2 system call. It is basically the same as mmap, except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that cannot be done using the standard system calls. This helps deal with the multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no single standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user, or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which interacted with times and dates: it could return the time and date and set the system’s idea of the time and date by altering the seconds. &#039;stime&#039; is still present in Linux because it works, unlike the timezone half of &#039;settimeofday&#039;, which was meant to change the timezone (the tz_dsttime field) as well as the time; that field never worked as intended, and every occurrence of it in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
&lt;br /&gt;
The second subtype is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to it is &#039;uname&#039;, which does the same and is also used in newer versions of UNIX (not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third subtype is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, and some of them are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer through a pointer argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets a process identifier. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such their use (in particular the format of the cap_user_*_t types) changes as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process control calls are system calls that handle the start, termination and other tasks required for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix there are eleven system calls that make up process control. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call takes a binary file as an argument and turns it into a process. When the system call works properly it does not return; instead it gives control to the new program, which replaces the process that made the call. Each variant is used when different arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: A signal is sent to a process when the proper conditions are met. When the program receives the signal it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will do nothing about it. (The only signals that can never be ignored or caught are SIGKILL and SIGSTOP.) The second is to leave the signal in its default state, which for most signals means that the process ends when it receives it. The last option is to catch the signal: when this occurs, the Unix system gives control to a handler function that executes whatever is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal name is not a valid signal, or if no process has a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts, except that of the exec group only execve exists as a true system call (the other variants are library wrappers around it). These system calls behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will almost never see major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the idea of processes communicating with one another. Just as humans use a telephone as their portal to communicate with each other, processes use &amp;quot;pipes&amp;quot; (among other mechanisms) as their gateway. &lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each subgroup.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the form int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one for reading data and the other for writing data. Reads and writes proceed in sequential order and complete their task fully; that is, there are no partial writes (for writes of up to PIPE_BUF bytes): the pipe transmits the whole buffer that was sent before completing. The same holds for reading, where a message is read all the way through before another read consumes new data arriving in the pipe.   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages via queues identified by IDs. &#039;&#039;msgget()&#039;&#039; obtains the identifier of the message queue associated with a key. Closely related, but not the same, the &#039;&#039;msgrcv()&#039;&#039; call receives a message from the queue named by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations on the queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked; semaphores are used to control access to shared resources, and file locking is a good analogy for understanding them. Semaphores are not usually held singly but in groups: the &#039;&#039;semget()&#039;&#039; call creates a set that can contain several semaphores. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to do: depending on whether the operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for it to reach zero, or the caller blocks until the semaphore is large enough, respectively. Semaphores were first conceived by Dijkstra and used in computers in the late 60s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: Functions involving shared memory let processes create, attach and detach shared address regions. The &#039;&#039;shmget()&#039;&#039; call returns the ID of a shared memory region, creating it if it does not already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; and detaches the shared memory.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux now use the same calls for the majority of these functions, except for a few that are slightly different.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: more specifically, they allow user-space programs to request kernel services that they are not authorized to access directly. Over the years of development of the Linux and Unix OS, system calls have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. This is how the original 35 system calls grew to an astonishing quantity of hundreds. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that all come together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4639</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4639"/>
		<updated>2010-10-15T08:27:48Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Communications Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it cannot access kernel memory and it cannot call kernel functions. The CPU enforcing this restriction is commonly known as protected mode. System calls are the exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Unix and Linux, the system call implementations are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that shipped with the original UNIX OS in the early 70s. In the next sections, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group handle every operation required to run a file system in the operating system. Creating a file, deleting a file, opening a file and closing a file are just a few examples, and most of these calls have hardly changed over the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to change its current root directory to the one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, so a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Flag arguments select everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, releasing it for reuse. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest version of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (designated by a file descriptor). The only change came in Unix System V release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used a 16-bit offset, but it was quickly replaced by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets and is still used in modern Linux and Unix systems. As of now, developers are working on &#039;&#039;lseek64&#039;&#039;, a system call that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file’s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system; they do the same thing, except that fstatvfs takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that existing calls could take relative pathnames as arguments. They can easily be spotted, as their names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware; they are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls let the operating system load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, was per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments are used to better control the device: you use devices as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which maps or unmaps files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access the device through memory. This system call is still used in Unix environments, and since Linux 2.4 the kernel also provides the &#039;&#039;mmap2&#039;&#039; system call. It is essentially the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
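As a sketch of the mapping idea, here is Python&#039;s mmap module (which wraps the same system call) mapping a hypothetical temporary file, reading and writing through the mapped memory:&lt;br /&gt;

```python
import mmap, os, tempfile

# Create a small temporary file to map (hypothetical demo file).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)

fd = os.open(path, os.O_RDWR)
with mmap.mmap(fd, 0) as m:       # length 0 maps the whole file
    first = bytes(m[:5])          # read through the mapping
    m[:5] = b"HELLO"              # write through the mapping
os.close(fd)

with open(path, "rb") as f:       # changes made via memory reached the file
    data = f.read()
os.remove(path)
```

Writes made through the mapped region land in the file itself, which is exactly why mapping a device lets a process drive it through ordinary memory accesses.&lt;br /&gt;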
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that cannot be done with the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes allowing various operations on its device. The request codes are hardware dependent, so there is no standard set for this system call.&lt;br /&gt;
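A minimal illustration of the request-code idea, using Python&#039;s fcntl.ioctl wrapper and the FIONREAD request (with a pipe standing in for a device, since ioctl works on any descriptor whose driver understands the code):&lt;br /&gt;

```python
import fcntl, os, struct, termios

# FIONREAD is a request code asking the driver how many bytes are
# waiting to be read on this descriptor.
r, w = os.pipe()
os.write(w, b"hello")

buf = struct.pack("i", 0)                    # space for the driver's answer
buf = fcntl.ioctl(r, termios.FIONREAD, buf)  # issue the request code
pending = struct.unpack("i", buf)[0]         # 5 bytes are waiting

os.close(r)
os.close(w)
```

A different driver would accept a completely different set of request codes, which is the portability problem the paragraph above describes.&lt;br /&gt;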
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system&#039;s information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used to interact with times and dates was &#039;stime&#039;, which sets the system&#039;s idea of the time and date by altering the seconds counter. &#039;stime&#039; is still available in Linux, unlike the timezone-setting half of &#039;settimeofday&#039;: its tz_dsttime field was meant to handle timezones and daylight saving, but every occurrence of this field in the kernel source, apart from its declaration, is a bug. &lt;br /&gt;
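The get-time half of this sub-type can be sketched with Python&#039;s time module, which sits on top of the kernel&#039;s clock system calls:&lt;br /&gt;

```python
import time

# time.time() is built on the kernel's clock calls
# (gettimeofday/clock_gettime); it returns seconds since the epoch.
now = time.time()
whole_seconds = int(now)

# Converting the raw seconds counter into a calendar date is done in
# user space; the kernel only keeps the counter that stime/settimeofday set.
utc = time.gmtime(whole_seconds)
```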
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so that it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; and its successor &#039;uname&#039; get the name of and information about the current kernel (&#039;uname&#039; is also used in newer versions of UNIX, not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process ID. The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the use of these functions (in particular the format of the cap_user_*_t types) changes as the kernel is updated. &#039;getppid&#039; returns the process ID of the calling process&#039;s parent and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX, the following eleven system calls make up the process control calls:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child process to wait for, or if its status argument points to an invalid address.&lt;br /&gt;
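The fork()/wait() interaction described above can be sketched with Python&#039;s os wrappers around the same system calls:&lt;br /&gt;

```python
import os

# fork() returns 0 in the child and the child's PID in the parent,
# exactly as described above.
pid = os.fork()
if pid == 0:
    os._exit(42)                 # child terminates immediately with status 42

# parent: waitpid() blocks until the child ends and returns its PID
child_pid, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)   # unpack the child's exit status (42)
```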
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call. &lt;br /&gt;
Each variant is chosen according to the arguments to be given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this source: [http://www.di.uevora.pt/~lmr/syscalls.html]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The rest of the arguments are a list of command&lt;br /&gt;
line arguments to the new program (argv[]).  The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
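Since a successful exec never returns, it is normally paired with fork(): the child execs a new program while the parent waits. A minimal sketch using Python&#039;s os.execvp wrapper, running the standard &#039;true&#039; program (assumed to be on the PATH):&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # execvp searches the PATH for the program, like execvp() in C.
    # On success the child process is replaced and this call never returns.
    try:
        os.execvp("true", ["true"])
    finally:
        os._exit(127)            # only reached if the exec itself failed

# parent: collect the exit status of whatever program the child became
_, status = os.waitpid(pid, 0)
succeeded = os.WEXITSTATUS(status) == 0
```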
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call controls how a process reacts when a given signal is delivered to it. A program can respond to a signal in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will do nothing in response. The only signal that can&#039;t be ignored or caught is SIGKILL. The second is to leave the signal in its default state, which for many signals means the process will end when it receives the signal. The last option is to catch the signal: when it arrives, the unix system gives control to a handler function that executes whatever response is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: This system call sends a signal to a process. It fails if the signal name is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In linux, all of these unix system calls have counterparts that behave the same way, except for the exec group: only execve exists as a true system call (the other exec variants are library wrappers around it). However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its different implementations in different versions of linux and unix. It is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of linux are released, these system calls will likely never see major modifications, but other system calls based on them may be created for specific cases, to make it easier to write programs.&lt;br /&gt;
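A small sketch of handler installation: Python&#039;s signal.signal is implemented with sigaction() on POSIX systems, so it shows the catch-the-signal behaviour portably:&lt;br /&gt;

```python
import os, signal

caught = []

def handler(signum, frame):
    # runs when the process receives SIGUSR1
    caught.append(signum)

# Install the handler (sigaction() underneath on POSIX systems),
# replacing SIGUSR1's default disposition with a catch.
signal.signal(signal.SIGUSR1, handler)

os.kill(os.getpid(), signal.SIGUSR1)   # kill() sends the signal to ourselves
```

After os.kill returns, the handler has run and caught holds the signal number, demonstrating the catch option described above.&lt;br /&gt;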
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Much as humans use a telephone as their portal for communicating with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the form int pipe(int file_descriptors[2]). The file_descriptors array holds two descriptors: one for reading data and one for writing data. Writes and reads complete fully and in sequential order. For example, there are no partial writes: the pipe transmits the whole block of data that was sent before the transmission completes. The same holds for reading, where a read completes before new information coming into the pipe is read. &lt;br /&gt;
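The read-end/write-end pairing can be sketched with Python&#039;s os.pipe wrapper around the same system call:&lt;br /&gt;

```python
import os

# pipe() returns two file descriptors: one for reading, one for writing.
read_end, write_end = os.pipe()

os.write(write_end, b"hello pipe")   # the whole block is written at once
os.close(write_end)                  # closing the write end signals end-of-data

received = os.read(read_end, 1024)   # the reader gets the data in order
os.close(read_end)
```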
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all deal with sending and receiving messages through queues identified by IDs. &#039;&#039;msgget()&#039;&#039; obtains the message queue identifier associated with a key. Closely related, but not the same, the &#039;&#039;msgrcv()&#039;&#039; command receives a message from the queue named by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations on the queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: A semaphore is either set or checked, and semaphores are used to control access to shared resources; file locking is a helpful analogy for understanding them. Semaphores aren&#039;t usually held singly but rather in groups: the &#039;&#039;semget()&#039;&#039; command creates a set that can contain several semaphores. &#039;&#039;semop()&#039;&#039; decides what we want a semaphore to accomplish: depending on whether its operation value is positive, zero or negative, the value is added to the semaphore, the caller waits for it to reach zero, or the caller blocks until the semaphore is large enough, respectively. Semaphores were first conceived by Dijkstra and used in computers in the late 60&#039;s.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: The shared memory functions allow the user to create, attach and detach shared memory regions. The &#039;&#039;shmget()&#039;&#039; command returns the ID of a shared memory region, creating the region if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process, and &#039;&#039;shmdt()&#039;&#039; reverses &#039;&#039;shmat()&#039;&#039; by detaching the shared memory.&lt;br /&gt;
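A sketch of the same create/attach/detach life cycle; note that Python&#039;s standard library exposes POSIX shared memory rather than the System V shmget()/shmat() calls named above, so this is an analogy with the same life cycle, not the same interface:&lt;br /&gt;

```python
from multiprocessing import shared_memory

# Create a named region and map it in (roughly shmget + shmat in one step).
region = shared_memory.SharedMemory(create=True, size=64)
region.buf[:5] = b"hello"                 # write through the mapping

# A second handle attaches to the same region by name, as shmat() would.
other = shared_memory.SharedMemory(name=region.name)
seen = bytes(other.buf[:5])               # both handles see the same bytes

other.close()                             # detach, like shmdt()
region.close()
region.unlink()                           # remove the region from the system
```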
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the Linux kernel (2.6.30+) and the Unix operating systems for a long time. They are the gateway between user space and kernel services: more specifically, they allow user-space programs to request kernel services that they have no authority to perform directly. Over the years of development of the Linux and Unix OS, the system calls have not changed drastically. Rather than radical changes, the development of system calls has mostly added more specific calls to solve new issues arising within the OS. This is how the original 35 system calls grew to an astonishing quantity of hundreds of system calls. The hundreds of system calls available at one&#039;s disposal can all be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or Unix. System calls are a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4629</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4629"/>
		<updated>2010-10-15T08:09:05Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Communications Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). System call implementations are small routines, generally written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit into the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that came with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every operation required to run a file system in the operating system. Creating a file, deleting a file, opening a file and closing a file are just a few examples, and most of these calls hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were additions of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, allowing it to be reused. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest version of Unix, to delete a directory users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
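The open/creat flag behaviour can be sketched with Python&#039;s os.open wrapper (which passes these flags straight to the system call) and a hypothetical scratch file:&lt;br /&gt;

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")  # hypothetical scratch file

# O_CREAT plus O_WRONLY reproduces what creat() did in early UNIX:
# create the file if needed and open it for writing.
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"created via open\n")
os.close(fd)                      # frees the descriptor for reuse

# Reopen read-only, as the access-mode flags described above allow.
fd = os.open(path, os.O_RDONLY)
contents = os.read(fd, 1024)
os.close(fd)
```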
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file (assigned to a file descriptor); the only change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. It used a 16-bit address offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit address offsets and is still used in modern Linux and Unix systems. Developers are now working on &#039;&#039;lseek64&#039;&#039;, a version that will use 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, &#039;&#039;stat&#039;&#039; returns a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same functionality.&lt;br /&gt;
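A short sketch of stat and lseek through Python&#039;s os wrappers, on a hypothetical temporary file:&lt;br /&gt;

```python
import os, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")       # the write advances the file offset to 10

# stat reports file metadata; st_size is the file length in bytes.
size = os.stat(path).st_size

# lseek moves the file offset; SEEK_SET counts from the start of the file.
os.lseek(fd, 4, os.SEEK_SET)
tail = os.read(fd, 3)             # reads the 3 bytes starting at offset 4

os.close(fd)
os.remove(path)
```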
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were added so that calls could take relative pathnames as arguments. They are easily spotted because their names all end with &#039;at&#039;. Here is a sample list of the new system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. The few changes made to &#039;&#039;mount&#039;&#039; were mostly new mount flags added to enhance performance or control. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, was per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags give finer control over the device: you use devices as if they were files, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which maps or unmaps files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access the device through memory. This system call is still used in Unix environments, and since Linux 2.4 the kernel also provides the &#039;&#039;mmap2&#039;&#039; system call. It is essentially the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that cannot be done with the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes allowing various operations on its device. The request codes are hardware dependent, so there is no standard set for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system&#039;s information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used to interact with times and dates was &#039;stime&#039;, which sets the system&#039;s idea of the time and date by altering the seconds counter. &#039;stime&#039; is still available in Linux, unlike the timezone-setting half of &#039;settimeofday&#039;: its tz_dsttime field was meant to handle timezones and daylight saving, but every occurrence of this field in the kernel source, apart from its declaration, is a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so that it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; and its successor &#039;uname&#039; get the name of and information about the current kernel (&#039;uname&#039; is also used in newer versions of UNIX, not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process ID. The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and the use of these functions (in particular the format of the cap_user_*_t types) changes as the kernel is updated. &#039;getppid&#039; returns the process ID of the calling process&#039;s parent and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX, the following eleven system calls make up the process control calls:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. &#039;&#039;wait()&#039;&#039; fails if the process has no child process to wait for, or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call &lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call. &lt;br /&gt;
Each variant is chosen according to the arguments to be given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions for these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
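A short sketch of the exec idea, via os.execv in Python. To avoid assuming any fixed binary path, the forked child exec&#039;s the Python interpreter itself (sys.executable):&lt;br /&gt;

```python
import os, sys

# The exec family replaces the calling process image with a new program.
pid = os.fork()
if pid == 0:
    # On success execv never returns: the child becomes the new program.
    os.execv(sys.executable, [sys.executable, "-c", "raise SystemExit(3)"])
    os._exit(1)                  # reached only if execv itself failed

_, status = os.wait()
print("new program exited with", os.WEXITSTATUS(status))
```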
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call sets how a process responds when a given signal is delivered to it. A program can act in three different ways when it receives a signal. The first is to ignore it completely: it won&#039;t matter how many times the signal is sent, the process will do nothing because of it. The signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which for many signals means that when the process receives it, the process will end. The last option is to catch the signal; when this occurs the UNIX system gives control to a handler function that executes whatever is appropriate for the process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process identified by its PID. It fails if the signal name is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
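A minimal sketch of catching a signal and sending one with kill, using Python&#039;s signal and os modules (which wrap these calls):&lt;br /&gt;

```python
import os, signal

caught = []

def handler(signum, frame):
    # invoked by signal delivery when SIGUSR1 arrives
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)   # catch instead of the default action
os.kill(os.getpid(), signal.SIGUSR1)     # kill: send a signal by PID (to ourselves)
print("caught:", caught)
```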
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group of system calls only execve exists as a true system call (the others are library wrappers around it). These system calls also behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and UNIX. It is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will likely never undergo major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Similar to how humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the form int pipe(int file_descriptors[2]). File_descriptors is an array with two entries: one for reading data and one for writing data. Writing and reading proceed in sequential order, and each operation runs to completion. That is, there are no partial writes: the pipe writes the whole chunk of data that was sent before completing the transmission, and likewise a read is carried all the way through before another read takes data from the pipe.&lt;br /&gt;
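A minimal sketch of the pipe call through Python&#039;s os wrappers:&lt;br /&gt;

```python
import os

r, w = os.pipe()                  # one descriptor to read from, one to write to
os.write(w, b"hello through the pipe")
os.close(w)                       # closing the write end signals end-of-data
data = os.read(r, 1024)
os.close(r)
print(data)
```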
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through a queue, usually via IDs. &#039;&#039;msgget()&#039;&#039; returns the message queue identifier associated with a key. Closely related, but not the same, the &#039;&#039;msgrcv()&#039;&#039; command is used to receive a message from the queue identified by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue; this command can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations through queries.&lt;br /&gt;
&lt;br /&gt;
Semaphores: The idea of a semaphore is that it can be set or checked. They are used to control access to files; one can think of file locking to get a better understanding of semaphores. Semaphores aren&#039;t usually held singly, but rather in groups. This is done by creating a set that can contain several semaphores through the &#039;&#039;semget()&#039;&#039; command. &#039;&#039;semop()&#039;&#039; decides what we want the semaphore to accomplish: depending on whether the operation value is positive, zero, or negative, the value is added to the semaphore, the caller waits for it to become zero, or the caller is blocked until the semaphore is large enough, respectively.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: Functions involving shared memory allow the user to access, attach, and detach shared addresses. The &#039;&#039;shmget()&#039;&#039; command returns the ID for a shared memory region, and can also create the region if it doesn&#039;t already exist. The &#039;&#039;shmat()&#039;&#039; function attaches the shared memory to the virtual address space of the calling process. &#039;&#039;shmdt()&#039;&#039; reverses the &#039;&#039;shmat()&#039;&#039; command and detaches the shared memory.&lt;br /&gt;
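Python&#039;s standard library does not wrap the System V shmget family directly, so as a stand-in sketch, an anonymous shared mmap region illustrates the same idea: memory written by one process is visible to another:&lt;br /&gt;

```python
import mmap, os

# an anonymous mapping; mmap's default flags make it shared across fork
shared = mmap.mmap(-1, 16)

pid = os.fork()
if pid == 0:
    shared.seek(0)
    shared.write(b"from child")   # the child writes into the shared region
    os._exit(0)

os.waitpid(pid, 0)
shared.seek(0)
print(shared.read(10))            # the parent sees the child's write
```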
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the structure of the Linux kernel (2.6.30+) and the UNIX operating systems for a long period of time. They are the gateway between user space and kernel services: they allow user-space programs to acquire kernel services that they do not have the authority to access directly. Over the years of development of the Linux and UNIX operating systems, the system calls have not changed drastically. Rather than radical changes, development has merely added more specific system calls to solve new issues that arise within the OS. This is how the original 35 system calls grew to an astonishing quantity of hundreds of system calls. The hundreds of system calls available at one&#039;s disposal can all be categorized into 6 major groups: file management, device management, information maintenance, process control, communications, and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces all coming together to form what we know today as the Linux kernel (2.6.30+) or UNIX. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4624</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4624"/>
		<updated>2010-10-15T08:01:00Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: /* Communications Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II+) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In UNIX and Linux, the system call implementations are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The UNIX and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, the UNIX and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that came with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening, and closing files are just a few examples, and most of these calls hardly changed throughout the years.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were additions of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it so that it can be reused. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. In the earliest version of UNIX, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and helped solve the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
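A minimal sketch of open with OR-ed flags, a write, and close, using Python&#039;s os wrappers; the file name demo.txt is just an arbitrary example in a scratch directory:&lt;br /&gt;

```python
import os, tempfile

# a scratch file in a freshly created temporary directory
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# O_CREAT plays the role of the historical creat call; flags are OR-ed together
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"first line\n")
os.close(fd)                      # the descriptor number may now be reused

print(os.path.exists(path))
```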
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access, and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file (assigned to a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file. This call used a 16-bit address offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which allows the call to use 32-bit address offsets. It is still used in modern Linux and UNIX systems. As of now, developers are trying to implement &#039;&#039;lseek64&#039;&#039;, a system call that will use 64-bit addresses. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat has returned nanosecond fields in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system. They both do the same thing, except &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same functionality.&lt;br /&gt;
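A short sketch of read, write, lseek and fstat working together on a scratch file (the name notes.bin is arbitrary), via Python&#039;s os wrappers:&lt;br /&gt;

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.bin")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.write(fd, b"0123456789")       # write 10 bytes

os.lseek(fd, 4, os.SEEK_SET)      # lseek: jump to byte offset 4
chunk = os.read(fd, 3)            # read 3 bytes starting there
size = os.fstat(fd).st_size       # fstat: file status via the descriptor
os.close(fd)
print(chunk, size)
```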
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a file link&#039;s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;. The &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system.&lt;br /&gt;
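A minimal sketch of link, symlink and unlink on scratch files (the names original, hardlink and softlink are arbitrary), via Python&#039;s os wrappers:&lt;br /&gt;

```python
import os, tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "original")
with open(target, "w") as f:
    f.write("data")

os.link(target, os.path.join(d, "hardlink"))    # link: a second name, same inode
os.symlink(target, os.path.join(d, "softlink")) # symlink: added in 4.2BSD
os.unlink(os.path.join(d, "softlink"))          # removes only the link itself
print(os.stat(target).st_nlink)                 # the file still has two hard names
```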
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take relative pathnames as arguments. They can easily be spotted, as these system call names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
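As a sketch of the &#039;at&#039; idea: Python&#039;s os.open accepts a dir_fd argument, which uses openat underneath so that a relative name (here the arbitrary report.txt) is resolved against a directory file descriptor rather than the current working directory:&lt;br /&gt;

```python
import os, tempfile

d = tempfile.mkdtemp()
dfd = os.open(d, os.O_RDONLY)     # a directory file descriptor

# resolved relative to dfd, not the CWD, via the openat system call
fd = os.open("report.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dfd)
os.close(fd)
os.close(dfd)
print(os.listdir(d))
```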
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware and are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel. If a process is created using clone() with the CLONE_NEWNS flag, the process gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use them as if the devices were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device. This system call is still used in a UNIX environment, but since Linux 2.4, Linux replaced it with the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except the final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
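A minimal sketch of mapping a file into memory with Python&#039;s mmap module (which wraps the mmap call); the file mapped.bin is an arbitrary scratch file:&lt;br /&gt;

```python
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "mapped.bin")
with open(path, "wb") as f:
    f.write(b"A" * 4096)          # a one-page file to map

fd = os.open(path, os.O_RDWR)
m = mmap.mmap(fd, 4096)           # mmap: the file's contents appear as memory
m[0:5] = b"HELLO"                 # an ordinary memory write...
m.flush()                         # ...lands in the file itself
m.close()
os.close(fd)
with open(path, "rb") as f:
    print(f.read(5))
```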
&lt;br /&gt;
&lt;br /&gt;
Version 7 of UNIX introduced the &#039;&#039;ioctl&#039;&#039; system call for device-specific operations that can&#039;t be done using the standard system calls. This helps deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard available for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to these system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, plus a few other ones like &#039;ftime&#039;. In the earliest versions of UNIX the system call used was &#039;stime&#039;, which was used to interact with times and dates. &#039;stime&#039; could return the time and date and set the system&#039;s idea of the time and date by altering the seconds. &#039;stime&#039; is still being used by Linux because it works, unlike the timezone argument (tz_dsttime) of &#039;settimeofday&#039;, which was created to change timezones as well as the time but failed: each occurrence of this field in the kernel source (apart from its declaration) is a bug.&lt;br /&gt;
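A short sketch of reading the time from user space through Python&#039;s time module, which wraps these clock-related calls (setting the time is omitted since it requires privileges):&lt;br /&gt;

```python
import time

# gettimeofday-style wall-clock time, in fractional seconds since the epoch
now = time.time()

# clock_gettime exposes the kernel clocks directly (Unix only)
wall = time.clock_gettime(time.CLOCK_REALTIME)
print(int(now), int(wall))
```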
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following commands: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so it can be written to or read from. &#039;read&#039; retrieves data from the file, and &#039;write&#039; modifies data in the file. &#039;close&#039; is used to indicate that the file is no longer in use. Linux uses the same set of commands for the same purposes. In addition to those system calls, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to that is &#039;uname&#039;, which gets the name of and information about the current kernel (and is used in the newer versions of UNIX, not the older ones); &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, and some examples are common to both UNIX and Linux: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process, and &#039;stty&#039; sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. Linux has many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process&#039;s ID. The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are eleven system calls that make up the Process Control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, which makes one the parent process and the&lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that is&lt;br /&gt;
done. Wait fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the system call&lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does&lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that made the system call.&lt;br /&gt;
Each of these is used when different arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions for these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-&lt;br /&gt;
line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument.  The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call sets how a process responds when a given signal is delivered to it. A program can act in three different ways when it receives a signal. The first is to ignore it completely: it won&#039;t matter how many times the signal is sent, the process will do nothing because of it. The signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which for many signals means that when the process receives it, the process will end. The last option is to catch the signal; when this occurs the UNIX system gives control to a handler function that executes whatever is appropriate for the process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process identified by its PID. It fails if the signal name is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group of system calls only execve exists as a true system call (the others are library wrappers around it). These system calls also behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and UNIX. It is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will likely never undergo major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Similar to how humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: The &#039;&#039;pipe()&#039;&#039; call has the form int pipe(int file_descriptors[2]). File_descriptors is an array with two entries: one for reading data and one for writing data. Writing and reading proceed in sequential order, and each operation runs to completion. That is, there are no partial writes: the pipe writes the whole chunk of data that was sent before completing the transmission, and likewise a read is carried all the way through before another read takes data from the pipe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all involve sending and receiving messages through a queue, usually via IDs. &#039;&#039;msgget()&#039;&#039; returns the message queue identifier associated with a key. Closely related, but not the same, the &#039;&#039;msgrcv()&#039;&#039; command is used to receive a message from the queue identified by its msqid parameter, the ID of the queue to receive from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue; this command can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs message control operations through queries.&lt;br /&gt;
&lt;br /&gt;
Semaphores: The idea of a semaphore is that it can be set or checked. They are used to control access to files; one can think of file locking to get a better understanding of semaphores. Semaphores aren&#039;t usually held singly, but rather in groups. This is done by creating a set that can contain several semaphores through the &#039;&#039;semget()&#039;&#039; command. &#039;&#039;semop()&#039;&#039; decides what we want the semaphore to accomplish: depending on whether the operation value is positive, zero, or negative, the value is added to the semaphore, the caller waits for it to become zero, or the caller is blocked until the semaphore is large enough, respectively.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: &#039;&#039;shmget()&#039;&#039; creates or locates a shared memory segment, &#039;&#039;shmat()&#039;&#039; attaches the segment to the calling process&#039;s address space, and &#039;&#039;shmdt()&#039;&#039; detaches it.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: they allow programs running in user space to request kernel services that they cannot access directly. Over the years of development of Linux and Unix, the system calls have not changed drastically. Rather than radical redesigns, development has mostly added more specific system calls to solve new issues that arise within the OS. This approach has let the original 35 system calls grow to an astonishing quantity of hundreds, all of which can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we now know as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix, Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4618</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4618"/>
		<updated>2010-10-15T07:50:12Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. When the CPU prevents a process from accessing the kernel, this prevention is commonly known as protected mode. System calls, however, are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). The system call interface is typically reached through small wrapper functions written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all grew from the 35 system calls that came with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation that is required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
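As a minimal sketch of how these calls look from user space, the snippet below uses Python&#039;s os module, whose functions are thin wrappers over the chmod and chdir system calls; the scratch directory and file name are illustrative, not taken from any UNIX source.&lt;br /&gt;

```python
import os
import stat
import tempfile

# Create a scratch directory and an empty file to operate on.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.txt")
open(path, "w").close()

# chmod(2): restrict the file to owner read/write only.
os.chmod(path, 0o600)
mode = stat.S_IMODE(os.stat(path).st_mode)

# chdir(2): change the current working directory, then change back.
old_cwd = os.getcwd()
os.chdir(tmpdir)
new_cwd = os.getcwd()
os.chdir(old_cwd)
```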
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest versions of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
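The interplay of these calls can be sketched in Python, whose os functions wrap the corresponding system calls directly; the directory and file names here are made up for the example.&lt;br /&gt;

```python
import os
import tempfile

base = tempfile.mkdtemp()
newdir = os.path.join(base, "subdir")
os.mkdir(newdir)  # mkdir(2): create a directory

# open(2) with an access mode and status flags, creating the file.
fd = os.open(os.path.join(newdir, "a.txt"),
             os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"hello")
os.close(fd)  # close(2): release the descriptor for reuse

# rename(2) and rmdir(2), both added in 4.2BSD.
os.rename(os.path.join(newdir, "a.txt"), os.path.join(base, "b.txt"))
os.rmdir(newdir)

moved = os.path.exists(os.path.join(base, "b.txt"))
gone = not os.path.exists(newdir)
```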
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (through a file descriptor). The only change came in Unix System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to go to a specified position in a file; it used a 16-bit offset and was replaced early on by &#039;&#039;lseek&#039;&#039;, which allows 32-bit offsets and is still used in modern Linux and Unix systems. A 64-bit variant, &#039;&#039;lseek64&#039;&#039;, is provided for working with large files. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. Later, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They both do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems will output different values to represent the state of a file; since kernel 2.5.48, stat has returned a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls called &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039; were introduced to provide information about a mounted file system. They both do the same thing, except that fstatvfs takes a file descriptor as an argument. These calls are used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to provide the same functionality.&lt;br /&gt;
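A short sketch of read, write, lseek and fstat working together, again via Python&#039;s os wrappers over the system calls; the file path is invented for the example.&lt;br /&gt;

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)

os.write(fd, b"0123456789")   # write(2): ten bytes at offset 0
os.lseek(fd, 4, os.SEEK_SET)  # lseek(2): seek to absolute offset 4
tail = os.read(fd, 6)         # read(2): returns bytes 4 through 9
size = os.fstat(fd).st_size   # fstat(2): file status via the descriptor
os.close(fd)
```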
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
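The hard-link versus symbolic-link distinction can be demonstrated with the os module wrappers; the names orig, hard and soft are illustrative.&lt;br /&gt;

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "orig")
open(target, "w").close()

hard = os.path.join(base, "hard")
os.link(target, hard)     # link(2): a second name for the same inode

soft = os.path.join(base, "soft")
os.symlink(target, soft)  # symlink(2): added in 4.2BSD

same_inode = os.stat(target).st_ino == os.stat(hard).st_ino

os.unlink(soft)           # unlink(2): removes only the symbolic link
target_survives = os.path.exists(target)
```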
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could interpret relative pathnames against a directory file descriptor argument. They can easily be spotted as the system call names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the mount system call, mostly the creation of new mount flags to enhance performance and control. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, the process gets a new namespace initialized to be a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use them as if the devices were files, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;. This system call is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device through ordinary memory operations. This system call is still used in a Unix environment, but since Linux 2.4, Linux replaced it with the mmap2 system call. It is basically the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
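The idea of accessing a file through a mapped memory region can be sketched with Python&#039;s mmap module, which wraps mmap(2); the file path is invented for the example.&lt;br /&gt;

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mapped")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
os.write(fd, b"x" * 4096)  # the file must have length before mapping

m = mmap.mmap(fd, 4096)    # mmap(2): map the file into memory
m[0:5] = b"hello"          # plain memory stores modify the file contents
m.flush()
m.close()
os.close(fd)

with open(path, "rb") as f:
    first_bytes = f.read(5)
```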
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that can&#039;t be done using the standard system calls. This helps deal with a multitude of devices: each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard available for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the system&#039;s information back to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file or device attributes. To fully understand the difference between Linux and UNIX in regard to these system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;&#039;gettimeofday&#039;&#039; to get the time, &#039;&#039;settimeofday&#039;&#039; to set it, &#039;&#039;time&#039;&#039;, which returns the time in seconds, and a few others like &#039;&#039;ftime&#039;&#039;. In the earliest versions of UNIX, the system call used was &#039;&#039;stime&#039;&#039;, which sets the system&#039;s idea of the time and date, expressed in seconds. &#039;&#039;stime&#039;&#039; is still available in Linux because it works reliably, unlike the timezone half of &#039;&#039;settimeofday&#039;&#039;, which was created to change timezones (tz_dsttime) as well as the time; every occurrence of that field in the kernel source (apart from the declaration) is considered a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039;, &#039;&#039;close&#039;&#039; and &#039;&#039;write&#039;&#039;. &#039;&#039;open&#039;&#039; opens a file so the file can be written to or read from, &#039;&#039;read&#039;&#039; retrieves data from the file, &#039;&#039;write&#039;&#039; modifies data in the file, and &#039;&#039;close&#039;&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition to those system calls, Linux has its own unique ones: &#039;&#039;olduname&#039;&#039; and &#039;&#039;uname&#039;&#039; get the name of and information about the current kernel (&#039;&#039;uname&#039;&#039; is also used in newer versions of UNIX, though not the older ones), &#039;&#039;iopl&#039;&#039; changes the I/O privilege level, and &#039;&#039;sysfs&#039;&#039; gets file system type information.&lt;br /&gt;
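As a small illustration of the uname call mentioned above, Python exposes it as os.uname(), a direct wrapper over uname(2); the exact values returned depend on the host machine, so none are assumed here.&lt;br /&gt;

```python
import os

info = os.uname()            # uname(2): identify the running kernel
kernel_name = info.sysname   # e.g. "Linux" on a Linux host
kernel_release = info.release
```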
&lt;br /&gt;
The third sub-type is get/set process, file or device attributes. In UNIX there are several system calls for processing file and device attributes, and some of them are common to both UNIX and Linux: &#039;&#039;stat&#039;&#039; gets file status, &#039;&#039;fork&#039;&#039; spawns a new process, and &#039;&#039;stty&#039;&#039; sets the mode of the typewriter. The &#039;&#039;wait&#039;&#039; system call is used in both as well; the only real difference is that the Linux version of wait stores status information in an integer and takes a pointer to that integer as its argument. In Linux there are many more system calls of this type; here are a few of them: &#039;&#039;capget&#039;&#039; gets the capabilities of a process, &#039;&#039;capset&#039;&#039; sets the capabilities of a process, and &#039;&#039;getppid&#039;&#039; gets a process identifier. The &#039;&#039;capget&#039;&#039; and &#039;&#039;capset&#039;&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;&#039;getppid&#039;&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In Unix there are 11 system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;,&#039;&#039;wait()&#039;&#039;,&#039;&#039;execl()&#039;&#039;,&#039;&#039;execlp()&#039;&#039;,&#039;&#039;execle()&#039;&#039;,&#039;&#039;execvp()&#039;&#039;,&#039;&#039;execv()&#039;&#039;,&#039;&#039;execve()&#039;&#039;,&#039;&#039;exit()&#039;&#039;,&#039;&#039;signal()&#039;&#039;,&#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: It takes a process and creates an identical process, which makes one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. Wait fails if the process has no child process to wait for or if its status argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039; and &#039;&#039;execve()&#039;&#039; are system calls based on the same principle: the call &lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead it gives control to the new process, which replaces the process that called it.&lt;br /&gt;
Each of them is used when different arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command&lt;br /&gt;
line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character&lt;br /&gt;
pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of&lt;br /&gt;
character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
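A sketch of the fork-then-exec pattern, using Python&#039;s os.execvp wrapper (PATH search, argv as a list); the choice of the echo program and its arguments is illustrative.&lt;br /&gt;

```python
import os

pid = os.fork()
if pid == 0:
    # execvp(2)-style: search PATH for the program; on success this call
    # never returns, because the child process image is replaced by echo.
    os.execvp("echo", ["echo", "hello from exec"])
    os._exit(127)          # reached only if the exec itself failed

_, status = os.wait()      # reap the child and check it exited cleanly
exec_succeeded = os.WEXITSTATUS(status) == 0
```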
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call sets how a process reacts when a given signal is delivered to it. When the program receives the signal it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal at its default disposition, which for most signals means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs the Unix system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
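The &amp;quot;catch&amp;quot; disposition can be sketched with Python&#039;s signal module (which installs handlers via the underlying signal machinery) together with kill(2); SIGUSR1 is chosen arbitrarily as a catchable signal.&lt;br /&gt;

```python
import os
import signal

caught = []

def handler(signum, frame):
    # the "catch" disposition: run our own code when the signal arrives
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # kill(2): deliver the signal to ourselves
signal.signal(signal.SIGUSR1, signal.SIG_DFL)  # restore the default disposition
```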
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: This call sends a signal to a specified process. It fails if the signal name given is not a valid signal or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these Unix system calls have counterparts, except that in the exec group only &#039;&#039;execve()&#039;&#039; exists as a true system call; the others are library wrappers built on it. These system calls otherwise behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its differing implementations across versions of Linux and Unix; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the actions of the process when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will rarely see major modifications, but other system calls based on them may be created for specific cases to make it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communications calls relate to the concept of processes having the ability to communicate with one another. Similar to how humans use a telephone as their portal to communicate with each other, communicating processes use constructs such as &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In Unix there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; The pipe() system call takes a two-element array of file descriptors, as in int pipe(int file_descriptors[2]). One descriptor is for reading the data and the other is for writing the data. Data moves through the pipe in sequential order, and writes of up to PIPE_BUF bytes complete as a unit: there are no partial writes, so the pipe transmits the whole block of data that was sent before completing the transmission. The same concept holds for reading, where the available data is read all the way through before new information coming into the pipe is returned.   &lt;br /&gt;
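The two-descriptor behaviour of pipe() can be sketched with Python&#039;s os.pipe wrapper, which returns the read end and the write end of a fresh pipe; the message bytes are illustrative.&lt;br /&gt;

```python
import os

r, w = os.pipe()           # pipe(2): r is the read end, w is the write end
os.write(w, b"through the pipe")
os.close(w)                # closing the write end lets the reader see EOF
data = os.read(r, 1024)    # the whole message arrives intact
os.close(r)
```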
&lt;br /&gt;
&lt;br /&gt;
Messages: These functions all deal with sending messages to and receiving messages from a queue, usually by way of IDs. &#039;&#039;msgget()&#039;&#039; returns the identifier of the message queue associated with a given key. Closely related, but not the same, &#039;&#039;msgrcv()&#039;&#039; receives a message from the queue named by its msqid parameter, which is the ID of the queue to receive the message from. &#039;&#039;msgsnd()&#039;&#039; sends a message to the queue and can be thought of as the reverse of &#039;&#039;msgrcv()&#039;&#039;. Lastly, &#039;&#039;msgctl()&#039;&#039; performs control operations on the message queue.  &lt;br /&gt;
&lt;br /&gt;
Semaphores: &#039;&#039;semget()&#039;&#039; creates or accesses a set of semaphores, and &#039;&#039;semop()&#039;&#039; performs operations (incrementing, waiting for zero, or decrementing with blocking) on the members of a set.&lt;br /&gt;
&lt;br /&gt;
Shared Memory: &#039;&#039;shmget()&#039;&#039; creates or locates a shared memory segment, &#039;&#039;shmat()&#039;&#039; attaches the segment to the calling process&#039;s address space, and &#039;&#039;shmdt()&#039;&#039; detaches it.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the structure of the Linux kernel (2.6.30+) and the Unix operating systems. They are the gateway between user space and kernel services: they allow programs running in user space to request kernel services that they cannot access directly. Over the years of development of Linux and Unix, the system calls have not changed drastically. Rather than radical redesigns, development has mostly added more specific system calls to solve new issues that arise within the OS. This approach has let the original 35 system calls grow to an astonishing quantity of hundreds, all of which can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of intricate pieces that come together to form what we now know as the Linux kernel (2.6.30+) or Unix. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix, Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4601</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4601"/>
		<updated>2010-10-15T07:27:00Z</updated>

		<summary type="html">&lt;p&gt;Rarteaga: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. When the CPU prevents a process from accessing the kernel, this prevention is commonly known as protected mode. System calls, however, are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). The system call interface is typically reached through small wrapper functions written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating systems contain hundreds of system calls, but in general they all grew from the 35 system calls that came with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation that is required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and therefore a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest versions of Unix, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls helped users gain better control over them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used a 16-bit offset, and was quickly replaced by &#039;&#039;lseek&#039;&#039;, which allows 32-bit offsets and is still used in modern Linux and UNIX systems. Developers have since added 64-bit variants (such as the &#039;&#039;lseek64&#039;&#039; interface) for very large files. The &#039;&#039;stat&#039;&#039; system call allows a process to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, stat has returned nanosecond fields in the file’s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system. They do the same thing, except that fstatvfs takes a file descriptor as its argument. These calls are used in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to provide the same functionality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take relative pathnames as arguments. They can easily be spotted, as the system call names all end with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach them, to get and modify device attributes, and to read from and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the mount system call, most of them the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: devices are accessed as if they were files, using the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device directly. This system call is still used in UNIX environments, but since Linux 2.4, Linux has supplemented it with the mmap2 system call. It is basically the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of UNIX, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that cannot be done using the standard system calls, which helps it deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware dependent, so there is no universal standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer’s own information back to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set of time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few other ones like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which sets the system’s idea of the time and date in seconds. &#039;stime&#039; is still supported by Linux. &#039;settimeofday&#039;, by contrast, was also meant to change timezone information (tz_dsttime) as well as the time, but each use of that field in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub-type is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so that it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel, as does the similar &#039;uname&#039; (which is used in newer versions of UNIX, not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes; some examples common to both UNIX and Linux are &#039;stat&#039;, which gets file status, &#039;fork&#039;, which spawns a new process, and &#039;stty&#039;, which sets the mode of the typewriter. In Linux there are many more system calls of this sub-type, for example: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the process ID of the parent of the calling process. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities; these two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX, the following system calls make up the process control calls:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the other the child process. When &#039;&#039;fork()&#039;&#039; succeeds it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: Makes a parent process wait for a child process to end. It returns the PID of the child process that finished. Wait fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call takes a binary file as an argument and turns it into a running process. When the system call works properly it does not return; instead it gives control to the new program, which replaces the process that made the call. Each variant is used when different kinds of arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions for these system calls, as described by this reference [http://www.di.uevora.pt/~lmr/syscalls.html]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as command line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: A signal is sent to the process when the proper conditions are met. When the program receives the signal it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which typically means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs, the UNIX system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal name is not a valid signal or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts except for the exec group, of which only execve exists as a true system call (the others are library wrappers around it). These system calls behave the same way in Linux. However, the &#039;&#039;signal()&#039;&#039; call is not recommended because of its different implementations in different versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the actions of the process when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will likely never undergo major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Much as humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039; has the form int pipe(int file_descriptors[2]). file_descriptors is an array with two entries: one for reading data and one for writing data. Both writes and reads proceed in sequential order and complete their task fully. For example, there are no partial writes (for writes up to PIPE_BUF bytes): the pipe transmits the whole block of data that was sent before completing the transmission. The same concept holds for reading, where a message is read all the way through before another pipe is read or new information enters the pipe. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Messages: &#039;&#039;msgget()&#039;&#039;, &#039;&#039;msgsnd()&#039;&#039;, &#039;&#039;msgrcv()&#039;&#039;, &#039;&#039;msgctl()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Semaphores: &#039;&#039;semget()&#039;&#039;, &#039;&#039;semop()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Shared Memory: &#039;&#039;shmget()&#039;&#039;, &#039;&#039;shmat()&#039;&#039;, &#039;&#039;shmdt()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality. &lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the structure of the Linux kernel (2.6.30+) and UNIX operating systems for a long period of time. They are the gateway between user space and kernel services: they allow user-space programs to acquire kernel services that they cannot access directly. Over the years of development in the Linux and UNIX operating systems, the system calls have not had drastic changes. Rather than making radical changes, development has mostly added more specific system calls to solve new issues that arose within the OS. Hence, the original set of roughly 35 system calls has grown to an astonishing quantity of hundreds of system calls. With hundreds of system calls at one’s disposal, all can be categorized into six major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. Operating systems are colossal programs consisting of intricate pieces all coming together to form what we now know as the Linux kernel (2.6.30+) or UNIX. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Rarteaga</name></author>
	</entry>
</feed>