<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Csulliva</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Csulliva"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Csulliva"/>
	<updated>2026-04-23T01:56:39Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6557</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6557"/>
		<updated>2010-12-02T22:56:04Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Macro optimizations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of the research paper, it&#039;s essential to provide some background on the concepts &lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] This emulation, usually referred to as a virtual machine, typically consists of a hypervisor and a virtualized environment, giving the guest operating system the illusion that it&#039;s running on the bare hardware. In reality, the virtual machine runs as an application on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad and is associated with a number of areas where the technology is used, such as data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes of the assigned paper, we shall focus on hardware virtualization within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor (the operating system kernel) and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guests with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
The concept of recursively running one virtual machine inside another. For instance, the host hypervisor (L0) runs a VM called L1. In turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
[[File:VirtualizationDiagram-MH.png|thumb|right|400px|Nested virtualization]]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
A virtualization model that requires the guest OS kernel to be modified in order to have some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest so that it can gain some privileged hardware access via special calls into the hypervisor called hypercalls. The advantage is fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization.&lt;br /&gt;
&lt;br /&gt;
===Models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
A model of virtualization based on the idea that when a guest attempts to execute a privileged instruction, such as creating its own virtual machine, it triggers a trap or fault that is caught and handled by the host hypervisor. Depending on the hardware&#039;s model of virtualization support, the host hypervisor (L0) then determines whether it should handle the trap itself or forward it to the responsible parent of that guest hypervisor at a higher level.&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems there are four levels of access privilege, called rings, numbered 0 to 3.&lt;br /&gt;
Ring 0 is the most privileged level, allowing access to the bare hardware. The operating system kernel must&lt;br /&gt;
execute at Ring 0 in order to access the hardware and retain control. User programs execute at Ring 3, while Ring 1 and Ring 2 are dedicated to device drivers and other operations.&lt;br /&gt;
&lt;br /&gt;
In virtualization, the host hypervisor executes at Ring 0, while the guest virtual machine executes at Ring 3 because it&#039;s treated as a running application. This is why, when a virtual machine attempts to gain hardware privileges or executes privileged instructions, a trap occurs and the hypervisor steps in to handle it.&lt;br /&gt;
&lt;br /&gt;
====Models of hardware support====&lt;br /&gt;
&lt;br /&gt;
=====Multiple-level architecture=====&lt;br /&gt;
Every parent hypervisor handles the hypervisor running directly on top of it. For instance, assume that L0 (the host hypervisor) runs the VM L1. When L1 attempts to execute a privileged instruction and a trap occurs, the parent of L1, which is L0 in this case, handles the trap. If L1 runs L2, and L2 attempts to execute privileged instructions as well, then L1 acts as the trap handler. More generally, every parent hypervisor at level Ln acts as the trap handler for its guest VM at level Ln+1. This model is not supported by the x86-based systems discussed in the research paper.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
The model supported by x86-based systems. In this model, every trap must go back to the main host hypervisor at level L0. For instance, if the host hypervisor (L0) runs L1, then when L1 attempts to run its own virtual machine L2, a trap is triggered that goes down to L0, and L0 sends the result of the requested instruction back to L1. In general, a trap at level Ln is handled by the host hypervisor at L0, and the resulting emulated instruction goes back to Ln.&lt;br /&gt;
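To make the contrast concrete, here is a small illustrative Python model of the two architectures (our own sketch, not code from the paper): in the multiple-level model a trap is handled by the immediate parent, while in the single-level model every trap first lands at L0, which then involves each intermediate hypervisor in turn.&lt;br /&gt;

```python
# Toy model of the two hardware-support architectures (illustrative
# only; levels are integers, L0 is the host hypervisor).

def multi_level_trap(trap_level):
    """Multiple-level architecture: a trap at level n is delivered
    directly to its parent hypervisor at level n-1."""
    return trap_level - 1

def single_level_trap(trap_level):
    """Single-level architecture (x86): every trap exits to L0 first;
    L0 then dispatches through each intermediate hypervisor on the
    way back up. Returns the handler levels visited, in order."""
    return [0] + list(range(1, trap_level))

# A trap in L3: one switch in the multiple-level model,
# three handler levels visited in the single-level model.
assert multi_level_trap(3) == 2
assert single_level_trap(3) == [0, 1, 2]
```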
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run an application that&#039;s not compatible with the running OS inside a virtual machine. Operating systems can also provide the user a compatibility mode for other operating systems or applications; an example of this is the Windows XP Mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to let customers host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. Both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites can host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is essentially a hollow program or network that appears functional to outside users but in reality exists only as a security tool to observe or trap attackers. Using nested virtualization, we can create a honeypot of our system as virtual machines and see how the virtual system is attacked or which features are exploited, taking advantage of the fact that virtual honeypots can easily be controlled, manipulated, destroyed or restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of moving each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Rough version. Let me know of any comments/improvements that can be made on the talk page&#039;&#039;&#039;--[[User:Mbingham|Mbingham]] 19:51, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid 1970s (see paper citations 21, 22 and 36). Early research in the area assumes that there is hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions (see citation 12); however, these solutions suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the levels of virtualization increase, the number of control switches between different levels of hypervisors increases. A trap in a highly nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can be bounced between different levels of hypervisor, so one trap multiplies into many trap instructions. &lt;br /&gt;
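This exit multiplication can be sketched with a toy recurrence (our own illustration; the count k of privileged instructions per handler is an assumed figure, not taken from the paper):&lt;br /&gt;

```python
# Toy recurrence for trap multiplication under nesting (illustration
# only). Handling one exit from a guest at nesting depth n requires
# the handler one level down to execute k privileged instructions,
# each of which is itself an exit that must be handled further down.

def total_exits(depth, k):
    """Hardware exits caused by one trap from a guest `depth` levels up."""
    if depth == 1:
        return 1                       # L0 handles an L1 trap natively
    return 1 + k * total_exits(depth - 1, k)

assert total_exits(1, 10) == 1
assert total_exits(2, 10) == 11        # one L2 exit becomes 11 exits
assert total_exits(3, 10) == 111       # cost grows exponentially with depth
```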
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist, for example on x86 processors. Solutions that do not require such support suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique that reconciles the lack of hardware support on available hardware with efficiency: it solves the problem of a single nested trap expanding into many more trap instructions, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory and I/O devices as the three key resources that need to be shared. Putting this together, the paper presents a solution to the problem of how to multiplex the CPU, memory and I/O efficiently between multiple virtual operating systems and hypervisors on a system that has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ongoing evolution of computing favours intricate designs that are virtualized and fit well with cloud computing. The paper contributes to this by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for security and compatibility. The abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, give programmers an infrastructure on which to build further development and ideas. For example, the Accountable Virtual Machines paper wraps programs inside a particular VM state, which could be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
How does nested VMX virtualization work in the Turtles project? L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (VMCS: virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. That vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine and the hardware supports only a single level of hypervisor. To multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. When L2 later traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts; this would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of these operations traps, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was corrected by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2, and this process repeats continuously. -csulliva&lt;br /&gt;
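The VMCS merge described above can be sketched as follows (a hypothetical illustration; the field names are invented for clarity and are not the real VMCS fields defined by Intel&#039;s VMX specification):&lt;br /&gt;

```python
# Hypothetical sketch of the VMCS0->2 merge. L0 combines the
# environment it gives L1 (VMCS0->1) with the environment L1 wants
# for L2 (VMCS1->2) into a VMCS the real hardware can run directly.

def merge_vmcs(vmcs01, vmcs12):
    """Build VMCS0->2: guest state comes from what L1 specified for
    L2; host state must return control to L0, not L1."""
    return {
        "guest_state": vmcs12["guest_state"],   # L2 registers, etc.
        "host_state":  vmcs01["host_state"],    # exits land in L0
        # Controls are the union: L0 must see every event L1 asked
        # to intercept, plus everything L0 itself intercepts.
        "exit_controls": vmcs01["exit_controls"] | vmcs12["exit_controls"],
    }

vmcs01 = {"guest_state": "L1-regs", "host_state": "L0-entry",
          "exit_controls": {"EPT_VIOLATION"}}
vmcs12 = {"guest_state": "L2-regs", "host_state": "L1-entry",
          "exit_controls": {"IO_ACCESS"}}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
assert vmcs02["guest_state"] == "L2-regs"
assert vmcs02["host_state"] == "L0-entry"
```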
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
How does multi-dimensional paging work in the Turtles project? With n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation, but the hardware MMU exposes only two page tables: the regular guest page table (virtual to guest physical) and the EPT (guest physical to host physical). The Turtles project compresses the three translations onto the two available tables, going from start to end in two hops instead of three. This can be done with a shadow page table for the virtual machine, or with shadow-on-EPT, which compresses the three logical translations onto two tables. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
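The table compression can be illustrated with a small sketch (our own model; translations are shown as plain dictionaries of page numbers, whereas a real EPT is a hardware-walked radix tree):&lt;br /&gt;

```python
# Illustrative model of compressing two address translations into one.
# ept12 maps L2-physical pages to L1-physical pages (maintained by L1);
# ept01 maps L1-physical pages to L0-physical pages (maintained by L0).

def compose(ept_a, ept_b):
    """Compose two page mappings: the result translates a page the
    way ept_a followed by ept_b would, in a single step."""
    return {page: ept_b[frame]
            for page, frame in ept_a.items()
            if frame in ept_b}

ept12 = {0: 7, 1: 3}     # L2-physical -> L1-physical
ept01 = {7: 42, 3: 99}   # L1-physical -> L0-physical

# L0 builds EPT0->2 so the hardware translates L2-physical pages
# straight to machine pages in one hop:
ept02 = compose(ept12, ept01)
assert ept02 == {0: 42, 1: 99}
```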
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
How does I/O virtualization work in the Turtles project? There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which know they are running on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; among these, they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O and programmed I/O along with DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs to use an IOMMU to let its virtual machine access the device safely. There is only one platform IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU translations into the single hardware IOMMU page table so that L2 can program the device directly, and the device&#039;s DMAs go into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
How did the authors implement optimizations to make the Turtles project faster? The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2, and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. They optimized the transitions between L1 and L2; each such transition involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimized this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying and tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time, but VMCS data can be accessed without ill side effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of slow exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
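A toy accounting of the vmread/vmwrite optimization (our own illustration; the number of VMCS fields touched per exit is an assumed figure, not a measurement from the paper):&lt;br /&gt;

```python
# When L1 handles one L2 exit it touches many VMCS fields; under
# trap-and-emulate each single-field vmread/vmwrite by L1 is itself
# an exit to L0, while a bulk memory copy of the fields is not.

FIELDS_TOUCHED_PER_EXIT = 50   # assumed figure, for illustration only

def exits_per_l2_exit(bulk_copy_enabled):
    """Hardware exits L0 sees while L1 handles a single L2 exit."""
    if bulk_copy_enabled:
        return 1   # one exit; fields copied with plain loads/stores
    return 1 + FIELDS_TOUCHED_PER_EXIT  # each vmread/vmwrite traps to L0

assert exits_per_l2_exit(False) == 51
assert exits_per_l2_exit(True) == 1
```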
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The good ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The bad ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The style of paper ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner and does a good job of conveying the technical details. Depending on the reader&#039;s background knowledge it can appear very complex; personally, it required quite some research before I could fully delve into the theory of the design. For instance, section 4.1.2, &amp;quot;Impact of Multidimensional paging&amp;quot;, illustrates the technique with an example using terms such as EPT and L1. All in all, the highly in-depth video greatly increased my awareness of the subject of nested hypervisors.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
Bottom line: the research presented in the paper is the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau Best Paper Award.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6556</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6556"/>
		<updated>2010-12-02T22:55:30Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* I/O virtualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of the research paper, it&#039;s essential to provide some background on the concepts &lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] This emulation, usually referred to as a virtual machine, typically consists of a hypervisor and a virtualized environment, giving the guest operating system the illusion that it&#039;s running on the bare hardware. In reality, the virtual machine runs as an application on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad and is associated with a number of areas where the technology is used, such as data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes of the assigned paper, we shall focus on hardware virtualization within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor (the operating system kernel) and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guests with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
The concept of recursively running one virtual machine inside another. For instance, the host hypervisor (L0) runs a VM called L1. In turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
[[File:VirtualizationDiagram-MH.png|thumb|right|400px|Nested virtualization]]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
A virtualization model that requires the guest OS kernel to be modified in order to have some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest so that it can gain some privileged hardware access via special calls into the hypervisor called hypercalls. The advantage is fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization.&lt;br /&gt;
&lt;br /&gt;
===Models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
A model of virtualization based on the idea that when a guest attempts to execute a privileged instruction, such as creating its own virtual machine, it triggers a trap or fault that is caught and handled by the host hypervisor. Depending on the hardware&#039;s model of virtualization support, the host hypervisor (L0) then determines whether it should handle the trap itself or forward it to the responsible parent of that guest hypervisor at a higher level.&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems there are four levels of access privilege, called rings, numbered 0 to 3.&lt;br /&gt;
Ring 0 is the most privileged level, allowing access to the bare hardware. The operating system kernel must&lt;br /&gt;
execute at Ring 0 in order to access the hardware and retain control. User programs execute at Ring 3, while Ring 1 and Ring 2 are dedicated to device drivers and other operations.&lt;br /&gt;
&lt;br /&gt;
In virtualization, the host hypervisor executes at Ring 0, while the guest virtual machine executes at Ring 3 because it&#039;s treated as a running application. This is why, when a virtual machine attempts to gain hardware privileges or executes privileged instructions, a trap occurs and the hypervisor steps in to handle it.&lt;br /&gt;
&lt;br /&gt;
====Models of hardware support====&lt;br /&gt;
&lt;br /&gt;
=====Multiple-level architecture=====&lt;br /&gt;
Every parent hypervisor handles the hypervisor running directly on top of it. For instance, assume that L0 (the host hypervisor) runs the VM L1. When L1 attempts to execute a privileged instruction and a trap occurs, the parent of L1, which is L0 in this case, handles the trap. If L1 runs L2, and L2 attempts to execute privileged instructions as well, then L1 acts as the trap handler. More generally, every parent hypervisor at level Ln acts as the trap handler for its guest VM at level Ln+1. This model is not supported by the x86-based systems discussed in the research paper.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
The model supported by x86-based systems. In this model, every trap must go back to the main host hypervisor at level L0. For instance, if the host hypervisor (L0) runs L1, then when L1 attempts to run its own virtual machine L2, a trap is triggered that goes down to L0, and L0 sends the result of the requested instruction back to L1. In general, a trap at level Ln is handled by the host hypervisor at L0, and the resulting emulated instruction goes back to Ln.&lt;br /&gt;
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run an application that&#039;s not compatible with the running OS inside a virtual machine. Operating systems can also provide the user a compatibility mode for other operating systems or applications; an example of this is the Windows XP Mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to let customers host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. Both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites can host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is essentially a hollow program or network that appears functional to outside users but in reality exists only as a security tool to observe or trap attackers. Using nested virtualization, we can create a honeypot of our system as virtual machines and see how the virtual system is attacked or which features are exploited, taking advantage of the fact that virtual honeypots can easily be controlled, manipulated, destroyed or restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of moving each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Rough version. Let me know of any comments/improvements that can be made on the talk page&#039;&#039;&#039;--[[User:Mbingham|Mbingham]] 19:51, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid 1970s (see paper citations 21, 22 and 36). Early research in the area assumes that there is hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions (see citation 12); however, these solutions suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the levels of virtualization increase, the number of control switches between different levels of hypervisors increases. A trap in a deeply nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can thus bounce between different levels of hypervisor, so that one trap instruction multiplies into many trap instructions.&lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist; on x86 processors, for example, it is absent. Solutions that do not require such support suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique that reconciles the lack of hardware support on available hardware with efficiency: it solves the problem of a single nested trap expanding into many more trap instructions, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory, and I/O devices as the three key resources to share. Putting this together, the paper presents a solution to the problem of efficiently multiplexing the CPU, memory, and I/O between multiple virtual operating systems and hypervisors on a system with no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The constant evolution of computers invites intricate designs that are virtualized and compatible with cloud computing. The paper contributes to this direction by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides a basis for security and compatibility. The abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, give programmers a foundation for further development and ideas that build on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs around a particular-state VM, which could be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
How does nested VMX virtualization work in the Turtles project? L0 (the lowest-level hypervisor) runs L1 using VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine: only L0 is using the processor&#039;s hardware mode for a hypervisor. To multiplex the hardware, L0 makes L2 run as a virtual machine of L1 by merging VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which would not normally be a problem; but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was addressed by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and this process repeats continuously. -csulliva&lt;br /&gt;
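The merge-and-forward flow described above can be sketched in miniature (illustrative Python only; the field names and the forwarding policy here are assumptions for exposition, not the paper&#039;s actual data layout):

```python
# Toy model of how L0 multiplexes nested guests. Field names and the
# handle/forward policy are illustrative assumptions.

def merge_vmcs(vmcs01, vmcs12):
    """L0 merges VMCS0->1 and VMCS1->2 into VMCS0->2, so the hardware
    can run L2 directly on L1's behalf."""
    merged = dict(vmcs01)          # start from L0's view of L1
    merged.update(vmcs12)          # overlay L1's specification for L2
    merged["shadowed_by"] = "L0"   # L0 always owns the real hardware VMCS
    return merged

def on_l2_exit(reason, l1_exit_reasons):
    """On an L2 trap, L0 handles it itself or forwards it to L1,
    depending on whether L1 asked to intercept that exit reason."""
    return "forward_to_L1" if reason in l1_exit_reasons else "handle_in_L0"

vmcs02 = merge_vmcs({"ept": "EPT0->1", "host": "L0"},
                    {"ept": "EPT1->2", "guest": "L2"})
print(vmcs02["shadowed_by"])           # prints "L0"
print(on_l2_exit("CPUID", {"CPUID"}))  # prints "forward_to_L1"
```

The point of the sketch is only the control flow: one merged structure per nested guest, and a per-exit decision at L0 between handling locally and reflecting the exit up to L1.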
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
How does multi-dimensional paging work in the Turtles project? The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU supports only two page tables, via EPT (Extended Page Tables): virtual to guest physical, and guest physical to host physical. The three translations are therefore compressed onto the two hardware tables, going from beginning to end in two hops instead of three. This is done with a shadow page table for the virtual machine and shadow-on-EPT; shadow-on-EPT compresses the three logical translations into two tables. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
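The table compression can be illustrated as a toy composition of two mappings (a sketch under simplifying assumptions: real EPTs are multi-level radix trees, and the page numbers here are invented):

```python
# Sketch of the idea behind constructing EPT0->2: run each L2-physical
# page through EPT1->2 and then EPT0->1, producing one table that goes
# from L2 physical to L0 physical in a single hop.

def compose(ept12, ept01):
    """Build EPT0->2 by composing the two per-level translations."""
    return {l2: ept01[l1] for l2, l1 in ept12.items() if l1 in ept01}

ept12 = {0x2000: 0x1000}   # L2 physical -> L1 physical (maintained by L1)
ept01 = {0x1000: 0x9000}   # L1 physical -> L0 physical (maintained by L0)
ept02 = compose(ept12, ept01)
print(hex(ept02[0x2000]))  # prints "0x9000": one lookup instead of two
```

Because the EPT layers change rarely, L0 can afford to rebuild this composed table lazily, which is why the technique cuts down the exit rate.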
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
How does I/O virtualization work in the Turtles project? There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest knows it is running in a virtual machine (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA, and interrupts. For DMA, each hypervisor (L0 and L1) needs to use an IOMMU to let its virtual machine safely access the device. There is only one physical IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table, so that L2 can program the device directly and the device&#039;s DMAs land directly in L2&#039;s memory space.&lt;br /&gt;
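The IOMMU compression follows the same composition pattern as the paging case; a minimal sketch (illustrative Python, with invented addresses; the real hardware table is programmed by L0, not built as a dict):

```python
# Sketch of multi-level device assignment: there is only one physical
# IOMMU, so L0 emulates an IOMMU for L1 and folds the stacked mappings
# into the one real table, letting the device DMA straight into L2 memory.

def collapse_iommu(levels):
    """Fold per-level IOMMU maps (innermost first) into the single
    table that would be programmed into the hardware IOMMU."""
    table = levels[0]
    for upper in levels[1:]:
        table = {dma: upper[phys] for dma, phys in table.items() if phys in upper}
    return table

l1_iommu = {0x100: 0x2000}   # device DMA addr -> L1 physical (emulated by L0)
l0_iommu = {0x2000: 0x8000}  # L1 physical -> L0 physical (real mapping)
hw_table = collapse_iommu([l1_iommu, l0_iommu])
print(hex(hw_table[0x100]))  # prints "0x8000": DMA lands in L2's memory
```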
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The good ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The bad ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The style of paper ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner, and it does a good job of conveying the technical details. Depending on the reader&#039;s background knowledge it can appear very complex; personally, it required quite some research before I could fully delve into the theory of the design. For instance, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, attempts to illustrate the technique with an example using terms such as EPT and L1. All in all, the provided video greatly deepened my awareness of the subject of nested hypervisors.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
Bottom line: the research presented in the paper is the first to achieve efficient nested x86 virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6554</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6554"/>
		<updated>2010-12-02T22:54:54Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Memory virtualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuday +        &lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it&#039;s essential that we provide some insight into and background on the concepts and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation typically consists of a guest hypervisor and a virtualized environment, giving the guest operating system the illusion that it&#039;s running on the bare hardware. In reality, we&#039;re running the virtual machine as an application on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used, such as data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guests with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
The concept of recursively running one virtual machine inside another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
[[File:VirtualizationDiagram-MH.png|thumb|right|Nested virtualization|left|400px]]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
A virtualization model that requires the guest OS kernel to be modified in order to have some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; instead it relies on a software interface that must be implemented in the guest so that it can gain some privileged hardware access via special instructions called hypercalls. The advantage is fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization.&lt;br /&gt;
&lt;br /&gt;
===Models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
A model of virtualization based on the idea that when a guest hypervisor attempts to execute privileged instructions, such as creating its own virtual machine, it triggers a trap or fault that is caught and handled by the host hypervisor. Based on the hardware&#039;s model of virtualization support, the host hypervisor (L0) then determines whether it should handle the trap itself or forward it to the responsible parent of that guest hypervisor at a higher level.&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, that range from 0 to 3.&lt;br /&gt;
Ring 0 is the most privileged level, allowing access to the bare hardware components. The operating system kernel must&lt;br /&gt;
execute at Ring 0 in order to access the hardware and retain control. User programs execute at Ring 3. Ring 1 and Ring 2 are dedicated to device drivers and other operations.&lt;br /&gt;
&lt;br /&gt;
In virtualization, the host hypervisor executes at Ring 0, while the guest virtual machine executes at Ring 3 because it&#039;s treated as a running application. This is why, when a virtual machine attempts to gain hardware privileges or executes privileged instructions, a trap occurs and the hypervisor steps in to handle it.&lt;br /&gt;
&lt;br /&gt;
====Models of hardware support====&lt;br /&gt;
&lt;br /&gt;
=====Multiple-level architecture=====&lt;br /&gt;
Every parent hypervisor handles the hypervisors running directly on top of it. For instance, assume that L0 (the host hypervisor) runs the VM L1. When L1 attempts to execute a privileged instruction and a trap occurs, the parent of L1, which is L0 in this case, handles the trap. If L1 runs L2, and L2 attempts to execute privileged instructions as well, then L1 acts as the trap handler. More generally, every parent hypervisor at level Ln acts as the trap handler for its guest VM at level Ln+1. This model is not supported by the x86-based systems discussed in our research paper.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
The model supported by x86-based systems. In this model, everything must go back to the main host hypervisor at level L0. For instance, if the host hypervisor (L0) runs L1, then when L1 attempts to run its own virtual machine L2, a trap is triggered that goes down to L0. L0 then sends the result of the requested instruction back to L1. Generally, a trap at level Ln is handled by the host hypervisor at L0, and the resulting emulated instruction goes back to Ln.&lt;br /&gt;
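The contrast between the two hardware models can be captured in a tiny routing rule (illustrative Python, using this article&#039;s level numbering; not code from the paper):

```python
# Toy routing rule for privileged-instruction traps under the two
# hardware models described above. L0 is the host hypervisor and
# level Ln+1 runs on top of Ln.

def trap_handler_level(trap_level, model):
    """Return the level that handles a trap raised at trap_level
    (trap_level must be at least 1)."""
    if model == "multi-level":
        return trap_level - 1   # the direct parent hypervisor handles it
    if model == "single-level":
        return 0                # x86: every trap lands at L0 first
    raise ValueError(model)

print(trap_handler_level(2, "multi-level"))   # prints 1: L1 handles L2's trap
print(trap_handler_level(2, "single-level"))  # prints 0: L0 handles L2's trap
```

In the single-level case, L0 may still forward the exit to L1 in software, which is exactly the control-switch multiplication the Turtles project sets out to make cheap.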
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run an application that&#039;s not compatible with the existing or running OS inside a virtual machine. Operating systems can also offer the user a compatibility mode for other operating systems or applications; an example is the Windows XP mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to let customers host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. Both sides benefit this way: the provider can attract more customers, and the customer gains the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites can host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is essentially a decoy program or network that appears functional to outside users but in reality exists only as a security tool to observe or trap attacks. Using nested virtualization, we can build a honeypot of our system out of virtual machines and watch how the virtual system is attacked and which features are exploited. We can take advantage of the fact that such virtual honeypots can easily be controlled, manipulated, destroyed or even restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of moving each VM separately, we can nest those virtual machines and their hypervisors into one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Rough version. Let me know of any comments/improvements that can be made on the talk page&#039;&#039;&#039;--[[User:Mbingham|Mbingham]] 19:51, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid 1970s (see paper citations 21, 22 and 36). Early research in the area assumes that there is hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions (see citation 12); however, these solutions suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the levels of virtualization increase, the number of control switches between different levels of hypervisors increases. A trap in a deeply nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can thus bounce between different levels of hypervisor, so that one trap instruction multiplies into many trap instructions.&lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist; on x86 processors, for example, it is absent. Solutions that do not require such support suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique that reconciles the lack of hardware support on available hardware with efficiency: it solves the problem of a single nested trap expanding into many more trap instructions, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory, and I/O devices as the three key resources to share. Putting this together, the paper presents a solution to the problem of efficiently multiplexing the CPU, memory, and I/O between multiple virtual operating systems and hypervisors on a system with no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
What are the research contribution(s) of this work? Specifically, what are the key research results, and what do they mean? (What was implemented? Why is it any better than what came before?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The constant evolution of computers invites intricate designs that are virtualized and compatible with cloud computing. The paper contributes to this direction by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides a basis for security and compatibility. The abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, give programmers a foundation for further development and ideas that build on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs around a particular-state VM, which could be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
How does nested VMX virtualization work in the Turtles project? L0 (the lowest-level hypervisor) runs L1 using VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine: only L0 is using the processor&#039;s hardware mode for a hypervisor. To multiplex the hardware, L0 makes L2 run as a virtual machine of L1 by merging VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which would not normally be a problem; but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was addressed by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and this process repeats continuously. -csulliva&lt;br /&gt;
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
How does multi-dimensional paging work in the Turtles project? The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU supports only two page tables, via EPT (Extended Page Tables): virtual to guest physical, and guest physical to host physical. The three translations are therefore compressed onto the two hardware tables, going from beginning to end in two hops instead of three. This is done with a shadow page table for the virtual machine and shadow-on-EPT; shadow-on-EPT compresses the three logical translations into two tables. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The good ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The bad ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The style of paper ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner, and it does a good job of conveying the technical details. Depending on the reader&#039;s background knowledge it can appear very complex; personally, it required quite some research before I could fully delve into the theory of the design. For instance, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, attempts to illustrate the technique with an example using terms such as EPT and L1. All in all, the provided video greatly deepened my awareness of the subject of nested hypervisors.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
Bottom line: the research presented in the paper is the first to achieve efficient nested x86 virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6553</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6553"/>
		<updated>2010-12-02T22:54:38Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Memory virtualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuday +        &lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it&#039;s essential that we provide some insight into and background on the concepts and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation typically consists of a guest hypervisor and a virtualized environment, giving the guest operating system the illusion that it&#039;s running on the bare hardware. In reality, we&#039;re running the virtual machine as an application on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used, such as data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guests with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
The concept of recursively running one virtual machine inside another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
[[File:VirtualizationDiagram-MH.png|thumb|right|Nested virtualization|left|400px]]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
A virtualization model that requires the guest OS kernel to be modified in order to gain some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of this article, para-virtualization does not simulate the entire hardware; instead, it relies on a software interface implemented in the guest that grants some privileged hardware access via special instructions called hypercalls. The advantage is fewer environment switches and less interaction between the guest and host hypervisors, and thus better efficiency. However, portability is an obvious issue, since a system can be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization.&lt;br /&gt;
&lt;br /&gt;
===Models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
A model of virtualization based on the idea that when a guest hypervisor attempts to execute privileged instructions, such as creating its own virtual machine, it triggers a trap or fault that is caught and handled by the host hypervisor. Depending on the hardware model of virtualization support, the host hypervisor (L0) then determines whether it should handle the trap itself or forward it to the responsible parent of that guest hypervisor at a higher level.&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, numbered 0 to 3.&lt;br /&gt;
Ring 0 is the most privileged level, allowing access to the bare hardware components. The operating system kernel must &lt;br /&gt;
execute at Ring 0 in order to access the hardware and maintain control. User programs execute at Ring 3. Rings 1 and 2 are dedicated to device drivers and other operations.&lt;br /&gt;
&lt;br /&gt;
In virtualization, the host hypervisor executes at Ring 0, while the guest virtual machine executes at Ring 3 because it&#039;s treated as a running application. This is why, when a virtual machine attempts to gain hardware privileges or execute privileged instructions, a trap occurs and the hypervisor steps in to handle it.&lt;br /&gt;
&lt;br /&gt;
====Models of hardware support====&lt;br /&gt;
&lt;br /&gt;
=====Multiple-level architecture=====&lt;br /&gt;
Every parent hypervisor handles the hypervisor running directly on top of it. For instance, assume L0 (the host hypervisor) runs the VM L1. When L1 attempts to execute a privileged instruction and a trap occurs, the parent of L1, in this case L0, handles the trap. If L1 runs L2, and L2 attempts to execute a privileged instruction as well, then L1 acts as the trap handler. More generally, every parent hypervisor at level Ln acts as a trap handler for its guest VM at level Ln+1. This model is not supported by the x86-based systems discussed in our research paper.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
The model supported by x86-based systems. In this model, every trap must go back to the main host hypervisor at level L0. For instance, if the host hypervisor (L0) runs L1, then when L1 attempts to run its own virtual machine L2, a trap is triggered that goes down to L0, and L0 sends the result of the requested instruction back to L1. In general, a trap at level Ln is handled by the host hypervisor at L0, and the resulting emulated instruction is then returned to Ln.&lt;br /&gt;
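To see why this single-level design makes deep nesting expensive, here is a toy model (not from the paper) of how one trap in a nested guest multiplies into many hardware exits: every privileged instruction an intermediate hypervisor executes while handling the trap is itself a trap that must go down to L0. The fixed cost of three privileged operations per handler is an illustrative assumption.

```python
# Toy model of single-level (x86-style) trap handling: every trap from any
# nesting level lands at L0, which emulates the privileged instructions the
# intermediate hypervisors execute while handling it. The cost model
# (3 privileged ops per handler) is an illustrative assumption.

def exits_per_trap(depth, privileged_ops_per_handler=3):
    """Count hardware exits caused by one trap in a VM nested `depth`
    levels deep (depth=1 is a plain guest under L0: one exit)."""
    if depth <= 1:
        return 1
    # The trap itself is one exit to L0. Handling is forwarded to the
    # parent hypervisor, and each privileged instruction that handler
    # runs behaves like a trap from a VM nested one level less deep.
    return 1 + privileged_ops_per_handler * exits_per_trap(depth - 1)

for d in range(1, 5):
    print(d, exits_per_trap(d))  # 1, 4, 13, 40: exits grow with depth
```

Even in this simplified model the number of exits grows exponentially with nesting depth, which is exactly the cost the Turtles project attacks.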
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run an application that&#039;s not compatible with the running OS inside a virtual machine. Operating systems can also provide the user with a compatibility mode for other operating systems or applications; an example is the Windows XP Mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. Both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and websites can host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is essentially a hollow program or network that appears functional to outside users but in reality exists only as a security tool to observe or trap attackers. Using nested virtualization, we can create a honeypot of our system as virtual machines and watch how the virtual system is attacked or which features are exploited, taking advantage of the fact that virtual honeypots can easily be controlled, manipulated, destroyed or restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrades or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of moving each VM separately, we can nest those virtual machines and their hypervisors into one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored,&lt;br /&gt;
since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Rough version. Let me know of any comments/improvements that can be made on the talk page&#039;&#039;&#039;--[[User:Mbingham|Mbingham]] 19:51, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid-1970s (see paper citations 21, 22 and 36). Early research in the area assumed hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor, also required architectural support. Other solutions assume that the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions (see citation 12); however, these suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the levels of virtualization increase, the number of control switches between different levels of hypervisors increases. A trap in a deeply nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can be bounced between different levels of hypervisor, so a single trap instruction multiplies into many trap instructions. &lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist; x86 processors, for example, lack it. Solutions that do not require such support suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique that reconciles the lack of hardware support on available hardware with efficiency: it solves the problem of a single nested trap expanding into many more trap instructions, allowing efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory, and I/O devices as the three key resources to share. In sum, the paper presents a solution to the problem of efficiently multiplexing the CPU, memory, and I/O between multiple virtual operating systems and hypervisors on a system that has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The non-stop evolution of computers invites intricate designs that are virtualized and in harmony with cloud computing. The paper contributes to this vision by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for security and compatibility. The abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, enable programmers to build further developments and ideas on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs around a VM in a particular state, which could well be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
How does nested VMX virtualization work in the Turtles project? L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine and only L0 uses the architecture&#039;s hypervisor mode. To multiplex the hardware so that L2 runs as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine to handle. While handling a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem; but because L1 runs in guest mode as a virtual machine, all of those operations trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was addressed by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process then repeats continuously. -csulliva&lt;br /&gt;
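The VMCS merge described above can be sketched in miniature. This is an illustrative model, not the paper&#039;s actual code: the field names and the split into guest state, host state and exit controls are assumptions, standing in for the real VMX fields.

```python
# Illustrative sketch of how L0 might merge VMCS0->1 and VMCS1->2 into
# VMCS0->2 so it can run L2 directly. Field names are assumptions.

def merge_vmcs(vmcs01, vmcs12):
    """Build VMCS0->2 from VMCS0->1 and VMCS1->2."""
    vmcs02 = {}
    # Guest-state fields of L2 come from what L1 specified for its guest.
    vmcs02["guest_state"] = dict(vmcs12["guest_state"])
    # Host-state fields must return control to L0's real environment,
    # not to L1's virtualized one.
    vmcs02["host_state"] = dict(vmcs01["host_state"])
    # Control fields are combined: an exit must occur if either L0 or L1
    # asked for it.
    vmcs02["exit_controls"] = vmcs01["exit_controls"] | vmcs12["exit_controls"]
    return vmcs02

vmcs01 = {"guest_state": {"rip": "l1_entry"},
          "host_state": {"rip": "l0_handler"},
          "exit_controls": {"EXIT_ON_HLT"}}
vmcs12 = {"guest_state": {"rip": "l2_entry"},
          "host_state": {"rip": "l1_handler"},
          "exit_controls": {"EXIT_ON_CPUID"}}

vmcs02 = merge_vmcs(vmcs01, vmcs12)
print(vmcs02["guest_state"]["rip"])  # L2 runs directly on the CPU...
print(vmcs02["host_state"]["rip"])   # ...but every exit lands in L0
```

The key design point the sketch captures is asymmetry: guest state flows up from L1&#039;s specification, while host state always points back at L0, which is why every L2 exit lands in L0 first.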
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
How does multi-dimensional paging work in the Turtles project? The main idea: with n = 2 nested virtualization there are three logical translations, from an L2 virtual address to an L2 physical address, from an L2 physical address to an L1 physical address, and from an L1 physical address to an L0 physical address. That is three levels of translation, but the hardware provides only two page-table mechanisms: the MMU page table (virtual to physical) and the EPT (guest physical to host physical). The three translations must therefore be compressed onto the two available tables, going from start to end in two hops instead of three. The baseline approach, shadow-on-EPT, compresses the three logical translations onto the two tables using a shadow page table for the virtual machine. The key observation behind multi-dimensional paging is that EPT tables rarely change, whereas guest page tables change frequently. L0 emulates EPT for L1, and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The good ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The bad ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The style of paper ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner, and it does a good job of conveying the technical details. Depending on the reader&#039;s background knowledge it can appear very complex; personally, it required quite some research before I could fully delve into the theory of the design. For instance, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, illustrates the technique with an example using terms such as EPT and L1. All in all, the accompanying in-depth video greatly increased my awareness of the subject of nested hypervisors.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
Bottom line: the research presented in the paper is the first to achieve efficient nested x86 virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau Best Paper Award.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6113</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6113"/>
		<updated>2010-12-02T03:11:29Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us so he&#039;s still in for the course, that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyway, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird because at last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy so I got free time now. I am reading it again to refresh my memory of it and will put up notes of what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he&#039;s still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here: should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth on such things like the VMC technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0-&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guest virtual machines with one another, and their interaction with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the main operating system (L1) runs a VM called L2, in turn, L2 runs another VM L3, L3 then runs L4 and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain or access privileged hardware context, it triggers a trap or a fault which gets handled or caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
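To make the model concrete, here is a minimal sketch in Python (illustrative only; the instruction names and guest state are invented, not taken from the paper):&lt;br /&gt;

```python
# Trap-and-emulate sketch: a privileged guest instruction traps to the
# host hypervisor, which validates it and emulates its effect on the
# guest virtual hardware context instead of the real one.
PRIVILEGED = {"read_cr3", "write_cr3"}  # invented instruction names

class HostHypervisor:
    def __init__(self):
        self.guest_state = {"cr3": 0}  # the emulated hardware context

    def handle_trap(self, instruction, operand=None):
        if instruction not in PRIVILEGED:
            raise ValueError("unexpected trap: " + instruction)
        if instruction == "write_cr3":
            # Emulate: update the guest virtual register, not the real CR3.
            self.guest_state["cr3"] = operand
            return None
        return self.guest_state["cr3"]  # emulated privileged read

host = HostHypervisor()
host.handle_trap("write_cr3", 0x1000)  # the guest attempt traps here
print(host.handle_trap("read_cr3"))    # the guest sees its own CR3 value
```

The guest never touches the real register; it only ever sees the emulated copy the host maintains for it.&lt;br /&gt;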
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IAAS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customers have the freedom to implement their systems on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IAAS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites such as NetFlix to host their API and database on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged, it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but they were able to do it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running directly on top of it. For instance, if L0 (the host hypervisor) runs L1 and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM is handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trap handling and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX chip in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea here is that when a guest hypervisor tries to operate on or gain hardware-level privileges, it evokes a fault or a trap. This trap or fault is then handled or caught by the main host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate command or request; if it is, the host grants the privilege to the guest, again having it think that it&#039;s actually running on the main bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to whichever level above is involved or responsible. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 goes down to L0, and then L0 forwards this command back to L1. This is the model we&#039;re interested in because this is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does the nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap because L1 is itself running as a virtual machine, since the x86 architecture supports only a single level of hypervisor. Multiplexing is achieved by making L2 run, in effect, as a virtual machine of L0 on behalf of L1: L0 merges the VMCSs, combining VMCS0-&amp;gt;1 with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, which enables L0 to run L2 directly. L0 then launches L2; when L2 causes a trap, L0 either handles it itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 runs in guest mode as a virtual machine, each of those operations traps, so a single high-level L2 (or L3) exit causes many exits (more exits, less performance). This problem was corrected by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
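The VMCS merge step can be sketched roughly as follows (a hedged Python illustration; the field names are simplified stand-ins, not Intel&#039;s real VMCS layout):&lt;br /&gt;

```python
# Rough sketch of merging VMCS 0-to-1 with VMCS 1-to-2 into VMCS 0-to-2.
def merge_vmcs(vmcs_0_1, vmcs_1_2):
    return {
        # Guest-state fields describe L2, so they come from the VMCS
        # that L1 prepared for its own guest.
        "guest_state": dict(vmcs_1_2["guest_state"]),
        # Host-state fields must return control to L0, the real hypervisor.
        "host_state": dict(vmcs_0_1["host_state"]),
        # Exit controls are the union: any condition that either L0 or L1
        # wants to intercept must cause a real exit to L0.
        "exit_controls": vmcs_0_1["exit_controls"] | vmcs_1_2["exit_controls"],
    }

vmcs_0_1 = {"guest_state": {"rip": 0x1000}, "host_state": {"rip": 0x100},
            "exit_controls": {"ept_violation"}}
vmcs_1_2 = {"guest_state": {"rip": 0x2000}, "host_state": {"rip": 0x1000},
            "exit_controls": {"io_access"}}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)  # lets L0 run L2 directly
```

The key design point is that the merged structure is something the real hardware can consume: it runs L2 as the guest but always returns control to L0, which then decides whether L1 needs to see the exit.&lt;br /&gt;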
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU supports only two page tables (via EPT): virtual to guest-physical and guest-physical to host-physical. The three translations are therefore compressed onto the two hardware tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine and with &amp;quot;shadow-on-EPT&amp;quot;, which compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
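The table-compression idea can be sketched like this (an illustrative Python toy, not KVM code; the page numbers are made up):&lt;br /&gt;

```python
# L0 composes the two nested translations into one table the hardware can
# use: EPT 0-to-2 maps L2-physical pages straight to L0-physical pages.
def build_ept_0_2(ept_1_2, ept_0_1):
    ept_0_2 = {}
    for l2_phys, l1_phys in ept_1_2.items():
        if l1_phys in ept_0_1:          # map only pages L0 actually backs
            ept_0_2[l2_phys] = ept_0_1[l1_phys]
    return ept_0_2

ept_0_1 = {0xA000: 0xF000, 0xB000: 0xE000}   # L1-physical to L0-physical
ept_1_2 = {0x1000: 0xA000, 0x2000: 0xB000}   # L2-physical to L1-physical
ept_0_2 = build_ept_0_2(ept_1_2, ept_0_1)    # two hops become one table
```

With EPT0-&amp;gt;2 in place, an ordinary L2 memory access needs no exit to L0 at all, which is where the reduction in exits comes from.&lt;br /&gt;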
&lt;br /&gt;
How I/O virtualization works:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which are aware they are running on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; among these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA, and interrupts. For DMA, each hypervisor (L0 and L1) needs to use an IOMMU to let its virtual machine access the device safely. There is only one platform IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table, so that L2 can program the device directly and the device&#039;s DMAs go into L2&#039;s memory space directly.&lt;br /&gt;
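The IOMMU compression step can be sketched the same way as the paging case: fold the IOMMU table that L0 emulates for L1 together with the L2-to-L0 physical mapping into the one table the hardware IOMMU actually walks. A hypothetical Python sketch with invented addresses:

```python
# Hypothetical sketch: L0 folds the IOMMU table it emulates for L1 (I/O virtual
# address to L2-physical) with its own L2-physical-to-L0-physical mapping, and
# loads the result into the single hardware IOMMU, so the device can DMA straight
# into L2's memory without L0 or L1 on the data path.

def compress_iommu(viommu_l1, l2_to_l0_phys):
    hw_iommu = {}
    for iova, l2_pa in viommu_l1.items():
        hw_iommu[iova] = l2_to_l0_phys[l2_pa]
    return hw_iommu

viommu_l1 = {0x7000: 0x4000}           # device DMA address to L2-physical page
l2_to_l0 = {0x4000: 0x9000}            # L2-physical to L0-physical (machine) page
assert compress_iommu(viommu_l1, l2_to_l0) == {0x7000: 0x9000}
```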
&lt;br /&gt;
&lt;br /&gt;
How they implemented the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2; each transition involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimized this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying and tracking. VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time. VMCS data can, however, be accessed without ill side effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of slow exit handling is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
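A toy cost model of these copying strategies (hypothetical, just to make the trade-off concrete; the field names are invented):

```python
# Hypothetical cost model (not the authors' code): when L1 runs as a guest, every
# vmread or vmwrite it issues traps to L0, so copying a VMCS field-by-field costs
# one exit per field. Copying only dirty fields, or bypassing vmread/vmwrite with
# one large memory copy, cuts that cost.

def exits_field_by_field(num_fields):
    return num_fields                  # one trap per vmread or vmwrite

def exits_dirty_only(dirty_fields):
    return len(dirty_fields)           # partial copy: trap only for modified fields

def exits_bulk_memory_copy(num_fields):
    return 0                           # plain loads/stores on VMCS memory do not trap

assert exits_field_by_field(50) == 50
assert exits_dirty_only({"guest_rip", "exit_reason"}) == 2
assert exits_bulk_memory_copy(50) == 0
```

The bulk-copy figure of zero is exactly why the optimization matters, and also why it is fragile: it relies on VMCS memory behaving like ordinary memory, which Intel does not guarantee.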
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running on a guest hypervisor such as L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines the optimization steps used to achieve this minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing VMCS data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits triggered inside the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* The research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* Security: being able to run other hypervisors without being detected.&lt;br /&gt;
&lt;br /&gt;
* Testing and debugging of hypervisors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued (anyone who is interested, feel free to take this topic).&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6089</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6089"/>
		<updated>2010-12-02T02:16:15Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us so he still in for the course, that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need to as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those paper by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of whos doing what. I should get the background concept done hopefully by tonight.  If anyone want to help with the other sections that would be great, please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, lets pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is good or bad, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought maybe if we each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary? I posted a link in references and I&#039;l try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided are excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here my question  who doing what in the group work and where should I focus my attention to do my part?- Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging which is discussed in the paper, computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper, the due date now is Dec 2nd.&#039;&#039;&#039; And thats really good, given that some of those concepts require time to sort of formulate. I also asked the prof on the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information for each section to make your follow student understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details, if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the backgrounds concept here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration, as far as I know, we should be allowed to do this. If anyone have any suggestions to what I have posted or any counter arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version than move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edit in the Contribution and Critique but I agree lets open a thread here and All collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the backgrounds concepts and Michael is handling the research problem. The other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow&#039;s night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow&#039;s morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the backgrounds concepts section like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version however. I will edit them in the next few hours.I just need to write something for protection rings and that would be it I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bottom-level hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware state, it triggers a trap or fault that is caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute; based on that decision, it provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
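A minimal sketch of the trap-and-emulate cycle, assuming an invented whitelist of privileged operations (hypothetical Python, not how a real hypervisor is structured):

```python
# Hypothetical sketch of trap-and-emulate: a privileged action by the guest traps
# to the host hypervisor, which decides whether to emulate the requested outcome
# or to inject a fault back into the guest. Operation names are invented.

ALLOWED = {"read_cr3", "out_port"}     # operations the host is willing to emulate

def host_handle_trap(operation):
    if operation in ALLOWED:
        return "emulated:" + operation  # host performs the work on the guest's behalf
    return "injected-fault"             # disallowed: the guest sees a fault instead

assert host_handle_trap("read_cr3") == "emulated:read_cr3"
assert host_handle_trap("write_msr") == "injected-fault"
```

The guest never notices the detour: from its point of view the privileged operation simply completed (or faulted), exactly as it would on bare hardware.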
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this is&lt;br /&gt;
the Windows XP mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMWare and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here is what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they are not altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the traps of the hypervisor running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the traps involved.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX-capable processor in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that when a guest hypervisor tries to operate with hardware-level privileges, it evokes a fault or trap; this trap is caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate request. If it is, the host provides the privileged outcome to the guest, again letting it think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the host hypervisor, which then forwards the trap and the virtualization state to the level responsible for handling it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 traps down to L0, and L0 forwards it back to L1. This is the model we&#039;re interested in because it is what x86 machines follow. See figure 1 in the paper for an illustration.&lt;br /&gt;
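&lt;br /&gt;
The forwarding flow described above can be sketched as a tiny simulation (a toy model with invented names, not real hypervisor code):&lt;br /&gt;

```python
# Toy model of single-level nested trap handling (illustrative only).
# Every exit, at any nesting depth, lands in L0 first; L0 then forwards
# it upward until it reaches the hypervisor responsible for the guest.

def handle_exit(source_level, owner_level):
    """Trace which hypervisor levels an exit visits, in order.

    source_level: the guest whose instruction trapped (e.g. 2 for L2).
    owner_level: the hypervisor responsible for that guest (e.g. 1 for L1).
    """
    path = [0]                     # hardware always exits to L0 first
    level = 0
    while level != owner_level:    # forward upward until the owner is reached
        level += 1
        path.append(level)
    return path

# An exit from L2 owned by L1 goes: hardware, then L0, then forwarded to L1.
assert handle_exit(source_level=2, owner_level=1) == [0, 1]
# An exit that L0 itself handles never leaves L0.
assert handle_exit(source_level=1, owner_level=0) == [0]
```

The point of the sketch is that no matter how deep the nesting, the hardware always exits to L0, which then walks the chain upward to the responsible hypervisor.&lt;br /&gt;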
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-optimizations to improve performance&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure, or VMCS). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. Because L0 is using the architecture&#039;s single hypervisor mode, L1 is itself running as a virtual machine, so vmlaunch traps and L0 must handle the trap. To multiplex the hardware so that L2 runs as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine to handle it.&lt;br /&gt;
To handle even a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of those operations traps, so a single high-level L2 exit (or L3 exit) causes many L1 exits, and more exits means less performance. The authors addressed this by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2; this process then repeats continuously.&lt;br /&gt;
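&lt;br /&gt;
The VMCS merge can be sketched roughly as combining two specifications; the field names below are invented for illustration and are not real VMCS encodings:&lt;br /&gt;

```python
# Rough sketch of VMCS merging: VMCS0->1 merged with VMCS1->2 gives
# VMCS0->2, letting L0 run L2 directly. Field names are invented.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    merged = {}
    # Guest state comes from the nested spec: it describes L2 itself.
    merged["guest_state"] = vmcs_1_2["guest_state"]
    # Host state comes from the outer spec: on an exit the CPU must
    # return to L0, never to L1, since only L0 controls the hardware.
    merged["host_state"] = vmcs_0_1["host_state"]
    # Controls are combined so that anything either L0 or L1 wants to
    # trap on actually causes an exit.
    merged["exit_on"] = vmcs_0_1["exit_on"] | vmcs_1_2["exit_on"]
    return merged

vmcs_0_1 = {"guest_state": "L1-regs", "host_state": "L0-regs",
            "exit_on": {"cpuid"}}
vmcs_1_2 = {"guest_state": "L2-regs", "host_state": "L1-regs",
            "exit_on": {"io"}}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
assert vmcs_0_2["host_state"] == "L0-regs"    # exits always land in L0
assert vmcs_0_2["exit_on"] == {"cpuid", "io"}
```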
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation, but the hardware MMU supports only two page tables (via EPT): one from guest virtual to guest physical, and one from guest physical to host physical. The three translations are therefore compressed onto the two hardware tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine and shadow-on-EPT, which compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
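&lt;br /&gt;
The compression of translations can be illustrated by composing two toy page tables; the page numbers are invented for the example:&lt;br /&gt;

```python
# Toy illustration of multi-dimensional paging: L0 composes EPT1->2
# (maintained by L1) with its own EPT0->1 to build EPT0->2, so the
# hardware can translate L2-physical to L0-physical in one step.

def compose(ept_1_2, ept_0_1):
    """Build EPT0->2 by running each EPT1->2 target through EPT0->1."""
    return {l2_page: ept_0_1[l1_page] for l2_page, l1_page in ept_1_2.items()}

ept_1_2 = {0: 4, 1: 7}      # L2-physical page -> L1-physical page
ept_0_1 = {4: 12, 7: 9}     # L1-physical page -> L0-physical page
ept_0_2 = compose(ept_1_2, ept_0_1)

# L2 page 0 now reaches machine page 12 in a single hardware walk.
assert ept_0_2 == {0: 12, 1: 9}
```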
&lt;br /&gt;
How nested I/O virtualization works:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which are aware they are running on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU to let its virtual machines safely access the device directly. Since there is only one hardware IOMMU, L0 emulates an IOMMU for L1, then compresses the multiple IOMMU page tables into the single hardware IOMMU page table so that L2 can program the device directly; the device then DMAs into L2&#039;s memory space directly.&lt;br /&gt;
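&lt;br /&gt;
A rough sketch of the IOMMU compression, with invented class and table names (real IOMMU programming happens through hardware registers, not Python dictionaries):&lt;br /&gt;

```python
# Sketch of multi-level device assignment: L1 programs its emulated
# IOMMU with L2-physical -> L1-physical DMA mappings; L0 intercepts
# each update and installs the equivalent L2-physical -> machine-page
# mapping in the one real hardware IOMMU table, so the device can DMA
# straight into L2 memory.

class EmulatedIommu:
    def __init__(self, ept_0_1, hw_iommu):
        self.ept_0_1 = ept_0_1      # L1-physical page -> machine page
        self.hw_iommu = hw_iommu    # the single real IOMMU table

    def map(self, l2_page, l1_page):
        # L1 thinks it mapped l2_page -> l1_page; L0 shadows the entry
        # into the hardware table, translated through EPT0->1.
        self.hw_iommu[l2_page] = self.ept_0_1[l1_page]

hw_table = {}
iommu = EmulatedIommu(ept_0_1={5: 40}, hw_iommu=hw_table)
iommu.map(l2_page=1, l1_page=5)

# The device now DMAs using L2 addresses and hits the right machine page.
assert hw_table == {1: 40}
```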
&lt;br /&gt;
&lt;br /&gt;
How the micro-optimizations were implemented:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3%, and with SPECjbb it is 6.3%. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits is much slower when it runs in the guest hypervisor L1 than when the same code runs in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to minimize this overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and accessing the data directly under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated within the exit-handling code itself).&lt;br /&gt;
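&lt;br /&gt;
The first optimization can be sketched as keeping frequently used VMCS fields in an ordinary memory page so that most accesses need no exit; all names and numbers below are invented for illustration:&lt;br /&gt;

```python
# Sketch of the vmread/vmwrite micro-optimization. Normally every VMCS
# access from L1 is a privileged instruction that exits to L0. The
# optimization keeps the fields L1 needs in a plain memory page, so L1
# can access them with ordinary loads and stores, and only falls back
# to a trapping access for fields not in the copy.

class ShadowVmcs:
    def __init__(self, cached_fields):
        self.cache = dict(cached_fields)
        self.exits = 0          # how many times we had to trap to L0

    def read(self, field):
        if field in self.cache:
            return self.cache[field]   # ordinary memory load, no exit
        self.exits += 1                # slow path: trap and emulate vmread
        return "value-from-L0"

vmcs = ShadowVmcs({"exit_reason": 30, "guest_rip": 4096})
assert vmcs.read("exit_reason") == 30
assert vmcs.read("guest_rip") == 4096
assert vmcs.exits == 0                 # hot fields cost no exits
vmcs.read("rare_field")
assert vmcs.exits == 1
```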
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* The research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* Security: being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* Testing and debugging of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued (anyone who is interested, feel free to take this topic).&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6032</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6032"/>
		<updated>2010-12-02T00:34:26Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material. Haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, lets pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because that&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here: should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section for our fellow students to understand what the paper is about without having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somewhat too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, lets open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I have both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004 --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain or access privileged hardware context, it triggers a trap or fault which is caught and handled by the host hypervisor. The host hypervisor then determines whether the instruction should be allowed to execute. Based on that decision, the host hypervisor provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
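&lt;br /&gt;
A minimal sketch of the model, using invented placeholder instruction names rather than real x86 opcodes:&lt;br /&gt;

```python
# Minimal trap-and-emulate sketch: the guest runs deprivileged; any
# privileged instruction it issues raises a trap, and the host decides
# whether to emulate its effect or refuse it. Instruction names are
# invented placeholders.

PRIVILEGED = {"disable_interrupts", "load_page_table"}
ALLOWED = {"disable_interrupts"}

def guest_execute(instruction, host_handle):
    if instruction in PRIVILEGED:
        # Hardware refuses to run this at guest privilege: trap to host.
        return host_handle(instruction)
    return "ran-directly"

def host_handle(instruction):
    # Host inspects the trapped instruction and emulates it if legitimate.
    return "emulated" if instruction in ALLOWED else "refused"

assert guest_execute("add", host_handle) == "ran-directly"
assert guest_execute("disable_interrupts", host_handle) == "emulated"
assert guest_execute("load_page_table", host_handle) == "refused"
```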
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system can provide the user with a compatibility mode for other operating systems or applications. An example of this is&lt;br /&gt;
the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it gets corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concepts stuff, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend the single level of memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the rapidly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user can manage his own virtual machines directly through a hypervisor of his choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architectural support: every hypervisor handles the hypervisors running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the resulting traps.&lt;br /&gt;
&lt;br /&gt;
* Single-level architectural support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that whenever a guest hypervisor tries to execute a privileged operation, it triggers a trap or fault. This trap is caught by the host hypervisor and inspected to see whether it is a legitimate and appropriate request; if it is, the host emulates the operation on behalf of the guest, again having it think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor. The host hypervisor then forwards the trap and the virtualization state to whichever higher level is responsible for it. For instance, suppose L0 runs L1, and L1 attempts to run L2: the command to run L2 traps down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at Figure 1 in the paper for a better understanding of this.&lt;br /&gt;
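As a toy illustration of the single-level model, the following Python sketch (all names invented for illustration, not taken from the paper&#039;s code) routes every trap through L0 first, which then forwards it to the responsible guest hypervisor:&lt;br /&gt;

```python
# Toy model of single-level (x86-style) trap routing: the hardware always
# transfers control to L0 first, and L0 forwards the trap to whichever
# guest hypervisor is responsible for the VM that caused it.

def route_trap(causing_level, responsible_level):
    """Return the list of hypervisor levels the trap visits, in order."""
    route = [0]  # every trap lands in L0 first
    if responsible_level != 0:
        route.append(responsible_level)  # L0 forwards it up
    return route

# L2 executes a privileged instruction; its hypervisor is L1, but the
# trap still goes through L0 before L1 ever sees it.
assert route_trap(causing_level=2, responsible_level=1) == [0, 1]
# If L0 itself is responsible, no forwarding is needed.
assert route_trap(causing_level=1, responsible_level=0) == [0]
```

In the multiple-level model, by contrast, the trap would go directly to the hypervisor one level below the faulting VM, with no detour through L0.&lt;br /&gt;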
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. The vmlaunch instruction traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine: the x86 architectural model supports only a single level of hypervisor. To multiplex the hardware, L0 makes L2 run as a virtual machine of L1 by merging VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; whenever L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1, as L2&#039;s hypervisor, to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. These operations would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of them traps, so a single high-level L2 exit (or L3 exit) causes many low-level exits, and more exits means less performance. This problem was addressed by making each single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
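A minimal sketch of the VMCS merge described above, using Python dictionaries with invented field names (the real VMCS layout is fixed by the VMX hardware, not by software):&lt;br /&gt;

```python
# Toy sketch of VMCS "merging": L0 builds VMCS0->2 by taking the
# guest-state fields L1 prepared for L2 (from VMCS1->2) and the host-state
# fields from its own VMCS0->1, so the CPU returns to L0 on every exit.
# Field names here are illustrative only.

def merge_vmcs(vmcs01, vmcs12):
    merged = {}
    merged["guest_state"] = dict(vmcs12["guest_state"])  # L2's state, set up by L1
    merged["host_state"] = dict(vmcs01["host_state"])    # exits must land in L0
    # Control fields combine what both L0 and L1 want trapped.
    merged["exit_controls"] = set(vmcs01["exit_controls"]) | set(vmcs12["exit_controls"])
    return merged

vmcs01 = {"host_state": {"rip": "l0_exit_handler"},
          "exit_controls": {"EPT_VIOLATION"},
          "guest_state": {"rip": "l1_entry"}}
vmcs12 = {"host_state": {"rip": "l1_exit_handler"},
          "exit_controls": {"IO_ACCESS"},
          "guest_state": {"rip": "l2_entry"}}

vmcs02 = merge_vmcs(vmcs01, vmcs12)
assert vmcs02["guest_state"]["rip"] == "l2_entry"       # runs L2 directly
assert vmcs02["host_state"]["rip"] == "l0_exit_handler" # but exits to L0
```

The key point the sketch captures: the merged structure runs L2&#039;s state directly on the hardware, yet every exit lands in L0, which then decides whether to forward to L1.&lt;br /&gt;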
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual address to an L2 physical address, from an L2 physical address to an L1 physical address, and from an L1 physical address to an L0 physical address. That is three levels of translation, but the hardware MMU provides only two page tables: the regular page table (virtual to guest physical) and the EPT (guest physical to host physical). The three logical translations must therefore be compressed onto the two available tables, so that each access goes from beginning to end in two hops instead of three. One way is shadow-on-EPT, which uses a shadow page table for the virtual machine to compress the three logical translations onto the two hardware tables. However, EPT tables rarely change, whereas guest page tables change frequently. So instead L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in far fewer exits.&lt;br /&gt;
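The EPT compression is essentially a composition of two mappings. Here is a toy Python version with made-up page numbers (the real tables are multi-level hardware structures):&lt;br /&gt;

```python
# Toy model of multi-dimensional paging: L0 composes the two EPT mappings
# (L2-physical -> L1-physical, and L1-physical -> L0-physical) into a
# single table EPT0->2, so the hardware walks only one EPT at runtime.

def compose_ept(ept12, ept01):
    """ept12 maps L2 physical pages to L1 physical pages;
    ept01 maps L1 physical pages to L0 (machine) pages."""
    ept02 = {}
    for l2_page, l1_page in ept12.items():
        if l1_page in ept01:  # only map pages that L0 has actually backed
            ept02[l2_page] = ept01[l1_page]
    return ept02

ept12 = {0: 7, 1: 3}   # L2 page 0 -> L1 page 7, L2 page 1 -> L1 page 3
ept01 = {7: 42, 3: 9}  # L1 page 7 -> machine page 42, L1 page 3 -> page 9
assert compose_ept(ept12, ept01) == {0: 42, 1: 9}
```

Pages missing from either table would simply fault, letting L0 fill in EPT0-&amp;gt;2 lazily, which matches the observation that EPT entries change rarely.&lt;br /&gt;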
&lt;br /&gt;
How I/O virtualization works:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest knows it is running on a virtual device (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get that performance safely, an IOMMU is used for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, the authors used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU to allow its virtual machines to access the device safely. There is only one platform IOMMU, so L0 needs to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 programs the device directly and device DMAs go straight into L2&#039;s memory space.&lt;br /&gt;
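The &amp;quot;3x3 options&amp;quot; come from independently choosing one of the three I/O methods at each of the two levels (L0-to-L1 and L1-to-L2). A quick Python sketch, with method names as my own labels:&lt;br /&gt;

```python
# Enumerate the 3x3 I/O virtualization options for two nesting levels:
# each level independently uses emulation, paravirtual drivers, or
# direct device assignment.
from itertools import product

methods = ["emulation", "paravirtual", "direct_assignment"]
options = list(product(methods, repeat=2))  # (L1 method, L2 method) pairs
assert len(options) == 9

# Multi-level device assignment is the case where both levels use direct
# assignment, letting L2 program the physical device itself.
assert ("direct_assignment", "direct_assignment") in options
```

The paper evaluates several of these combinations and finds the all-direct-assignment corner to perform best, since it avoids exits on the I/O path.&lt;br /&gt;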
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in a guest hypervisor such as L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated inside the exit-handling code itself).&lt;br /&gt;
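To illustrate optimization 1, here is a hedged Python sketch (all names invented; the real mechanism patches the guest to access an in-memory VMCS copy) counting how many exits are saved when L1 reads and writes VMCS fields in memory instead of trapping on every vmread/vmwrite:&lt;br /&gt;

```python
# Toy model of the vmread/vmwrite bypass: in the trap-and-emulate path,
# every VMCS access by L1 causes an exit to L0; in the optimized path,
# L1 accesses an in-memory copy of the fields with plain loads/stores.

class VmcsAccess:
    def __init__(self):
        self.shadow = {}  # in-memory copy of VMCS fields
        self.traps = 0    # number of exits to L0

    def vmwrite_trapped(self, field, value):
        self.traps += 1   # trap-and-emulate: one exit per access
        self.shadow[field] = value

    def vmwrite_direct(self, field, value):
        self.shadow[field] = value  # optimized: plain memory write, no exit

v = VmcsAccess()
for i in range(100):
    v.vmwrite_trapped("guest_rip", i)
assert v.traps == 100       # 100 accesses cost 100 exits
for i in range(100):
    v.vmwrite_direct("guest_rip", i)
assert v.traps == 100       # the optimized path adds no exits at all
```

Since handling a single L2 exit can involve many such VMCS accesses by L1, eliminating the per-access trap is where much of the measured speedup comes from.&lt;br /&gt;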
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6010</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6010"/>
		<updated>2010-12-01T23:58:44Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the backgrounds concept here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration, as far as I know, we should be allowed to do this. If anyone have any suggestions to what I have posted or any counter arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints for what can be considered a weakness or a bad spot, if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edits in the Contribution and Critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version, however. I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
A hypervisor, also referred to as a VMM (virtual machine monitor), is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware context, it triggers a trap or fault which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, it provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement his system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IAAS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites such as NetFlix to host their API and database on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMWare and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged, it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they do not alter the underlying architecture. This is the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it efficiently.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running directly on top of it. For instance, L0 (the host hypervisor) runs L1; if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM is handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a faked environment for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. When a guest hypervisor tries to execute a privileged operation, it causes a trap or a fault. This trap is caught by the host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate request; if it is, the host emulates the operation on the guest&#039;s behalf, again letting the guest think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap goes back to the lowest host hypervisor first. The host hypervisor then forwards the trap and the virtualization state to the level responsible for handling it. For instance, suppose L0 runs L1, and L1 attempts to run L2: the command to run L2 traps down to L0, and L0 then forwards it up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
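The trap-forwarding flow in the single-level model can be sketched in a few lines of Python (a toy model of the idea described above; all names and the forwarding scheme are illustrative, not the paper&#039;s code):&lt;br /&gt;

```python
# Sketch of single-level (x86-style) nested trap handling: every trap
# from any guest level first lands in L0, which then bounces it up one
# level at a time until the hypervisor responsible for the trapping
# guest (its direct parent) handles it. Names are purely illustrative.

def handle_trap(trapping_level, trap):
    """Return the path a trap takes; all traps reach L0 first."""
    path = ["L0"]                      # hardware always exits to L0
    responsible = trapping_level - 1   # parent hypervisor of the trapper
    # L0 forwards the trap upward until the responsible parent sees it
    for level in range(1, responsible + 1):
        path.append("L%d" % level)
    path.append("handled by L%d" % responsible)
    return path

# L2 traps while running: hardware exits to L0, which forwards to L1.
print(handle_trap(2, "vmlaunch"))
```

A trap taken while L1 itself runs goes straight to L0 and is handled there, which is why deep nesting multiplies the number of exits.&lt;br /&gt;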
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work?&lt;br /&gt;
L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and only L0 is using the architectural mode for a hypervisor. To make L2 run as a virtual machine of L1, L0 multiplexes the hardware: it merges VMCS0-&amp;gt;1 with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. These operations would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of them traps, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. The authors corrected this problem by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end L1 or L0, depending on the trap, finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
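The merge L0 performs can be pictured with a toy sketch. The field names and merge rules below are simplified assumptions for illustration, not the authors&#039; code: the guest state of the merged structure comes from what L1 specified for L2, the host state must return control to L0, and the exit controls are the union of what either hypervisor wants to intercept.&lt;br /&gt;

```python
# Toy sketch of the VMCS merge: L0 combines VMCS0to1 (how L0 runs L1)
# with VMCS1to2 (how L1 wants to run L2) into VMCS0to2, which the real
# hardware can execute directly. Field names are purely illustrative.

def merge_vmcs(vmcs_0to1, vmcs_1to2):
    merged = {}
    # Guest state of L2 comes from what L1 specified for its guest.
    merged["guest_state"] = vmcs_1to2["guest_state"]
    # On exit, control must return to L0, so host state comes from L0.
    merged["host_state"] = vmcs_0to1["host_state"]
    # Control fields: trap on anything either L0 or L1 wants to intercept.
    merged["exit_controls"] = sorted(set(vmcs_0to1["exit_controls"]) |
                                     set(vmcs_1to2["exit_controls"]))
    return merged

vmcs_0to1 = {"guest_state": "L1 regs", "host_state": "L0 regs",
             "exit_controls": ["cpuid", "io"]}
vmcs_1to2 = {"guest_state": "L2 regs", "host_state": "L1 regs",
             "exit_controls": ["cpuid", "cr3_write"]}
print(merge_vmcs(vmcs_0to1, vmcs_1to2))
```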
&lt;br /&gt;
How does multi-dimensional paging work?&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU only supports two page tables through EPT (extended page tables): one taking virtual to guest-physical addresses and one taking guest-physical to host-physical addresses. The three translations are therefore compressed onto the two hardware tables, going from beginning to end in two hops instead of three. One way to do this uses a shadow page table for the virtual machine: &amp;quot;shadow-on-EPT&amp;quot; compresses the three logical translations onto two tables. However, the EPT tables rarely change, whereas the guest page tables change frequently. So instead, L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
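Treating each translation as a lookup table, the compression is just function composition. The sketch below is a toy model with made-up page numbers, not the paper&#039;s implementation:&lt;br /&gt;

```python
# Toy model of multi-dimensional paging: three logical translations
# (L2 virtual to L2 physical, L2 physical to L1 physical, L1 physical
# to L0 physical) are collapsed so the hardware only walks two tables.
# Translations are modelled as plain dicts of made-up page numbers.

def compose(outer, inner):
    """Build a direct map equivalent to applying inner, then outer."""
    return {src: outer[dst] for src, dst in inner.items() if dst in outer}

l2_virt_to_l2_phys = {0x10: 0x20}   # guest page table maintained by L2
ept_1to2 = {0x20: 0x30}             # L2 physical to L1 physical (by L1)
ept_0to1 = {0x30: 0x40}             # L1 physical to L0 machine physical

# L0 folds EPT1to2 into EPT0to1 to get EPT0to2 ...
ept_0to2 = compose(ept_0to1, ept_1to2)
# ... so a walk needs two hops (guest table, then EPT0to2), not three.
l2_virt_to_l0_phys = compose(ept_0to2, l2_virt_to_l2_phys)
print(hex(l2_virt_to_l0_phys[0x10]))
```

Because the guest table and EPT0-&amp;gt;2 are what the hardware actually walks, page-table updates by L2 no longer force an exit on every access.&lt;br /&gt;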
&lt;br /&gt;
How does I/O virtualization work?&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation, para-virtualized drivers, and direct device assignment.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions in the lower level of the nested design (between L0 and L1). Second, the code handling exits runs much slower in a guest hypervisor such as L1 than the same code runs in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps taken to achieve minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits triggered by the exit-handling code itself).&lt;br /&gt;
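Optimization 1 can be loosely illustrated as follows. This is a hypothetical sketch, not the authors&#039; code: the guest hypervisor reads VMCS fields from an in-memory copy that L0 keeps, instead of issuing one trapping vmread per field.&lt;br /&gt;

```python
# Illustration of bypassing trapping vmread calls: instead of one exit
# per field access, L1 reads from a shared in-memory copy of the VMCS
# that L0 maintains. A counter shows the exits saved. All names here
# are illustrative.

class TrapCounter:
    def __init__(self):
        self.exits = 0

    def vmread(self, vmcs, field):
        self.exits += 1          # every emulated vmread costs one exit
        return vmcs[field]

def read_fields_trapping(counter, vmcs, fields):
    return [counter.vmread(vmcs, f) for f in fields]

def read_fields_from_copy(vmcs_copy, fields):
    # direct memory loads, no traps at all
    return [vmcs_copy[f] for f in fields]

vmcs = {"guest_rip": 0x1000, "exit_reason": 12, "guest_rsp": 0x7000}
counter = TrapCounter()
trapping = read_fields_trapping(counter, vmcs, list(vmcs))
direct = read_fields_from_copy(dict(vmcs), list(vmcs))
print(counter.exits, trapping == direct)
```

Same values, three fewer exits; in a real exit handler that reads and writes dozens of fields, the savings multiply.&lt;br /&gt;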
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* Security: being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* Testing and debugging of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5890</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5890"/>
		<updated>2010-12-01T02:44:17Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, since I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. Haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and such. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright well, the other member should show up soon, or I guess we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here... the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think that&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of the trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project to get done for tomorrow, so after 1 I will be free to work on this until it is time for 3004 --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end; there should not be a lot of writing for that part, and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] This emulation, usually referred to as a virtual machine, which includes a guest hypervisor and a virtualized environment, only gives the guest virtual machine the illusion that it&#039;s running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls the host&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute or access privileged hardware state, it triggers a trap or a fault which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
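The model can be sketched in a few lines of Python. This is a toy illustration under assumed names (the instruction set and host state below are made up, not the paper&#039;s code):&lt;br /&gt;

```python
# Toy sketch of trap and emulate: a guest's privileged instruction
# faults into the host hypervisor, which validates it and emulates the
# outcome instead of letting it run natively. Names are illustrative.

SAFE_TO_EMULATE = {"cpuid", "rdmsr"}

def guest_executes(instruction, host_state):
    """Privileged instructions trap; the host decides and emulates."""
    if instruction in SAFE_TO_EMULATE:
        # The host performs the operation on the guest's behalf and
        # returns the result, so the guest believes it ran directly
        # on the hardware.
        return "emulated %s = %s" % (instruction, host_state[instruction])
    # Illegitimate requests are refused by injecting a fault to the guest.
    return "fault injected into guest"

host_state = {"cpuid": "GenuineIntel", "rdmsr": "0x1a0"}
print(guest_executes("cpuid", host_state))
print(guest_executes("vmwrite_host_field", host_state))
```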
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP Mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to let customers host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to new server hardware for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMWare and VirtualBox have adapted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they do not alter the underlying architecture. This is the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity, and the next step in that evolution is to extend single-level memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machines directly through a hypervisor of their choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to operate on or gain hardware-level privileges, it triggers a fault or a trap. This trap is caught by the host hypervisor and inspected to see whether it is a legitimate and appropriate request; if it is, the host grants the privilege to the guest, again letting it think that it&#039;s running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor, which then forwards the trap and virtualization specification to whichever level above is responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2. The command to run L2 goes down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. See figure 1 in the paper for a better understanding.&lt;br /&gt;
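The forwarding rule of the single-level model can be sketched as follows (a toy model; the level numbering matches the L0/L1/L2 terminology above, everything else is invented):&lt;br /&gt;

```python
# Toy model of single-level (x86-style) nested virtualization:
# every trap, at any nesting depth, first lands at L0, which then
# forwards it to the hypervisor responsible for the trapping guest.

def route_trap(trapping_level, parents):
    """Return the list of levels a trap visits before being handled.

    parents maps each level to the hypervisor directly below it,
    e.g. {2: 1, 1: 0} means L1 runs L2 and L0 runs L1.
    """
    responsible = parents[trapping_level]
    path = [0]                      # hardware always exits to L0 first
    if responsible != 0:
        path.append(responsible)    # L0 forwards the trap upward
    return path

# A trap in L2 visits L0 first, then L1 (the hypervisor that runs L2).
path = route_trap(2, {2: 1, 1: 0})
```

The extra bounce through L0 on every trap is the cost of single-level architectural support, and it is what the paper&#039;s optimizations later target.&lt;br /&gt;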
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is running as a virtual machine while L0 occupies the architectural hypervisor mode. To multiplex the hardware so that L2 runs as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine to handle.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of those operations itself traps, so a single high-level L2 exit (or L3 exit) causes many exits (and more exits mean less performance). The authors corrected this by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end L1 or L0, depending on the trap, finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
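The VMCS merge described above can be sketched roughly like this (the field names are invented for illustration; a real VMCS has many architecture-defined fields):&lt;br /&gt;

```python
# Toy VMCS merge: L0 combines the specification it uses to run L1
# (vmcs01) with the specification L1 prepared for L2 (vmcs12) into
# a single vmcs02 that lets L0 run L2 directly on the hardware.
# Field names here are invented for illustration.

def merge_vmcs(vmcs01, vmcs12):
    vmcs02 = dict(vmcs12)               # guest state comes from the L1 spec
    # Host-state fields must point back at L0, not at L1, so that an
    # exit from L2 returns control to the real hypervisor.
    for field in ("host_rip", "host_cr3"):
        vmcs02[field] = vmcs01[field]
    # Trap on the union of events either hypervisor cares about.
    vmcs02["exit_on"] = sorted(set(vmcs01["exit_on"]) | set(vmcs12["exit_on"]))
    return vmcs02

vmcs01 = {"host_rip": "l0_entry", "host_cr3": "l0_pt", "exit_on": ["io"]}
vmcs12 = {"guest_rip": 0, "host_rip": "l1_entry",
          "host_cr3": "l1_pt", "exit_on": ["cpuid"]}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
```

The key point is that the merged structure keeps the guest state L1 asked for while redirecting all exits to L0, which then decides whether to forward them to L1.&lt;br /&gt;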
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual address to an L2 physical address, from an L2 physical address to an L1 physical address, and from an L1 physical address to an L0 physical address. That is three levels of translation, but the hardware MMU provides only two page tables: the regular page table (virtual to guest physical) and the EPT (guest physical to host physical). The authors compress the three translations onto the two hardware tables, going from start to end in two hops instead of three. This is done with a shadow page table and shadow-on-EPT.&lt;br /&gt;
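The compression of three translations into two hardware tables amounts to pre-composing the two lower mappings; here is a toy sketch with dictionaries standing in for page tables (all addresses are invented):&lt;br /&gt;

```python
# Toy multi-dimensional paging: three logical translations
# (L2 virtual -> L2 physical -> L1 physical -> L0 physical) are
# compressed into two hardware tables by pre-composing the two
# lower translations. Addresses are small ints for illustration.

def compose(outer, inner):
    """Build one table equivalent to looking up inner, then outer."""
    return {addr: outer[mid] for addr, mid in inner.items()}

l2_virt_to_l2_phys = {0x10: 0x20}   # guest page table inside L2
l2_phys_to_l1_phys = {0x20: 0x30}   # the EPT L1 maintains for L2
l1_phys_to_l0_phys = {0x30: 0x40}   # the EPT L0 maintains for L1

# L0 builds a single combined EPT covering both lower translations,
# so the hardware resolves an L2 access in two hops, not three.
ept02 = compose(l1_phys_to_l0_phys, l2_phys_to_l1_phys)

three_hop = l1_phys_to_l0_phys[l2_phys_to_l1_phys[l2_virt_to_l2_phys[0x10]]]
two_hop = ept02[l2_virt_to_l2_phys[0x10]]
```

Both lookups reach the same host physical address, but the composed table lets the MMU do it within its two supported dimensions.&lt;br /&gt;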
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb.&lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits, when run in a guest hypervisor such as L1, is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated by the exit-handling code itself).&lt;br /&gt;
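Optimization 1 can be illustrated with a toy model: accesses to a shadow copy of the VMCS held in ordinary memory cost nothing, while each trapped access would cause an exit (the class and field names are invented for the example):&lt;br /&gt;

```python
# Toy sketch of optimization 1: instead of trapping on every vmread or
# vmwrite, L1 reads and writes a shadow copy of the VMCS kept in
# ordinary memory; L0 syncs it with the real VMCS only on entry/exit.
# Names and fields are invented for illustration.

class ShadowVMCS:
    def __init__(self, fields):
        self.fields = dict(fields)   # plain memory: no trap to access
        self.traps = 0               # count of emulated (trapped) accesses

    def vmread(self, name):
        return self.fields[name]     # direct access, trap avoided

    def vmwrite(self, name, value):
        self.fields[name] = value    # direct access, trap avoided

class TrappingVMCS(ShadowVMCS):
    def vmread(self, name):
        self.traps += 1              # each access exits to L0
        return super().vmread(name)

fast = ShadowVMCS({"guest_rip": 0})
slow = TrappingVMCS({"guest_rip": 0})
for _ in range(100):
    fast.vmread("guest_rip")
    slow.vmread("guest_rip")
```

One hundred VMCS reads cost the trapping version one hundred exits and the shadow version none, which is why bypassing trap-and-emulate for these instructions matters so much for nested performance.&lt;br /&gt;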
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I&#039;ve read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (anyone who is interested, feel free to take this topic)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5879</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5879"/>
		<updated>2010-12-01T02:24:24Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please, if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve already been playing with VirtualBox, VMware and such things, so we should be familiar with some of the concepts the article covers, like nested virtualization, hypervisors and supervisors: things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here... the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections.&lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we gotta focus on that all together, not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I noticed is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs of the introduction section on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because that is not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of these concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness or, if you will, a bad spot in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the Contribution and Critique sections, but I agree: let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, only gives the guest the illusion that it is running directly on the main hardware. In other words, we can view the virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware context, it triggers a trap or fault that is caught and handled by the host hypervisor. The host hypervisor then determines whether the instruction should be allowed to execute and, based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this is the Windows XP mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and websites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenarion where a number of virtual machines must be moved to a new hardware server for upgrade, instead of having to move each VM sepertaely, we can nest those virtual machines and their hypervisors to create one nested entity thats easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they are not altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, it provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every hypervisor running on top of it. For instance, if L0 (the host hypervisor) runs L1, and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake environment for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that when a guest hypervisor tries to operate and gain hardware-level privileges, it provokes a fault or a trap. This trap or fault is then caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate request; if it is, the host emulates the privileged operation for the guest, again having it think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization state to the level that is involved or responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2. The command to run L2 goes down to L0, and L0 then forwards this command back to L1. This is the model we&#039;re interested in because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
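The forwarding rule above can be sketched as a toy Python model (levels are plain integers here, which is an illustrative simplification):&lt;br /&gt;

```python
# Toy model of single-level trap forwarding: the hardware delivers every
# trap to L0 first, and L0 then forwards it to the hypervisor directly
# below the faulting guest, which is the one responsible for handling it.

def trap_path(faulting_level):
    """Return the chain of hypervisors a trap from the given level visits."""
    responsible = faulting_level - 1   # traps from L2 belong to L1, and so on
    path = ["L0"]                      # hardware always enters L0 first
    if responsible:                    # nothing to forward if L0 is responsible
        path.append("L" + str(responsible))
    return path

# A trap raised while L2 runs lands in L0, which forwards it to L1:
assert trap_path(2) == ["L0", "L1"]
# A trap raised while L1 runs is handled by L0 itself:
assert trap_path(1) == ["L0"]
```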
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine, which it starts by executing vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is running as a virtual machine and only L0 occupies the architectural mode reserved for a hypervisor. To multiplex the hardware so that L2 can run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine handling.&lt;br /&gt;
While handling a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem; but because L1 is itself running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 (or L3) exit causes many exits (and more exits means less performance). This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
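The VMCS merge described above can be sketched as a toy Python model (the field names are invented; a real VMCS has many more fields with hardware-defined encodings):&lt;br /&gt;

```python
# Toy sketch of the VMCS merge: L0 combines the structure it uses to run
# L1 (vmcs01) with the structure L1 prepared for L2 (vmcs12) into vmcs02,
# which L0 can hand to the hardware to run L2 directly.

def merge_vmcs(vmcs01, vmcs12):
    """Build vmcs02 from vmcs01 and vmcs12."""
    return {
        # Guest state is what L1 set up for L2.
        "guest_state": vmcs12["guest_state"],
        # Host state comes from L0, so every exit returns control to L0.
        "host_state": vmcs01["host_state"],
        # Trap on anything either level asked to trap on.
        "exit_controls": set(vmcs01["exit_controls"]) | set(vmcs12["exit_controls"]),
    }

vmcs01 = {"guest_state": "L1-regs", "host_state": "L0-entry",
          "exit_controls": {"ept_violation"}}
vmcs12 = {"guest_state": "L2-regs", "host_state": "L1-entry",
          "exit_controls": {"cpuid"}}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
```

The key point the sketch captures is that vmcs02 carries L2 guest state but L0 host state, so every exit still lands in L0.&lt;br /&gt;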
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual address to an L2 physical address, from an L2 physical address to an L1 physical address, and from an L1 physical address to an L0 physical address. That is three levels of translation, but the hardware MMU supports only two page tables (the EPT mechanism): one taking virtual to guest-physical addresses and one taking guest-physical to host-physical addresses. The authors therefore compress the three translations onto the two tables.&lt;br /&gt;
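The compression of translations can be illustrated with a toy Python sketch (flat dicts stand in for page tables and the page numbers are arbitrary; a real EPT is a multi-level tree):&lt;br /&gt;

```python
# Toy sketch of multi-dimensional paging: the hardware offers only two
# translation tables, so the L2-physical-to-L1-physical table and the
# L1-physical-to-L0-physical table are composed into one table that maps
# L2-physical pages straight to L0-physical pages.

def compose(ept12, ept01):
    """Collapse two translation tables into one (L2-phys to L0-phys)."""
    return {l2: ept01[l1] for l2, l1 in ept12.items() if l1 in ept01}

ept12 = {0: 4, 1: 7}      # L2 physical page number to L1 physical page number
ept01 = {4: 90, 7: 42}    # L1 physical page number to L0 physical page number
ept02 = compose(ept12, ept01)
assert ept02 == {0: 90, 1: 42}
```

With the composed table in hardware, an L2 memory access needs no exit to L1 for the intermediate translation step.&lt;br /&gt;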
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3% and with SPECjbb is 6.3%.&lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in the guest hypervisor L1 is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated by the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5645</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5645"/>
		<updated>2010-11-28T00:28:36Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging which is discussed in the paper, computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they are not altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
In addition, nested virtualization is generally not supported on x86 systems (the architecture was not designed with it in mind); but, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows the ability for parallel virtualization on a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, it provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every hypervisor running on top of it. For instance, if L0 (the host hypervisor) runs L1, and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake environment for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that when a guest hypervisor tries to operate and gain hardware-level privileges, it provokes a fault or a trap. This trap or fault is then caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate request; if it is, the host emulates the privileged operation for the guest, again having it think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization state to the level that is involved or responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2. The command to run L2 goes down to L0, and L0 then forwards this command back to L1. This is the model we&#039;re interested in because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine, which it starts by executing vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is running as a virtual machine and only L0 occupies the architectural mode reserved for a hypervisor. To multiplex the hardware so that L2 can run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine handling.&lt;br /&gt;
While handling a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem; but because L1 is itself running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 (or L3) exit causes many exits (and more exits means less performance). This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits; to be continued (anyone who is interested, feel free to take this topic)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5640</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5640"/>
		<updated>2010-11-27T23:02:06Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can my find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us so he still in for the course, that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to a hold of me. Best way is through email. jslonosk@connect.Carleton.ca.  And if that is still in our group but doesn&#039;t participate,  too bad for him--JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need to as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those paper by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of whos doing what. I should get the background concept done hopefully by tonight.  If anyone want to help with the other sections that would be great, please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absloutely, I agree. But first, lets pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is good or bad, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought maybe if we each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary? I posted a link in references and I&#039;l try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided are excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here is my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging which is discussed in the paper, computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper, the due date now is Dec 2nd.&#039;&#039;&#039; And thats really good, given that some of those concepts require time to sort of formulate. I also asked the prof on the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information for each section to make your follow student understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details, if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they are not altering the underlying architecture, and this is basically the most interesting thing about the paper, since x86 computers don&#039;t support nested virtualization in hardware; yet apparently they were able to do it.&lt;br /&gt;
In addition, nested virtualization is generally not supported on x86 systems (the architecture was not designed with it in mind), but, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows the demand for running virtual machines beneath a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every other hypervisor running on top of it. For instance, L0 (the host hypervisor) runs L1; if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where each hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that in order for a guest hypervisor to perform an operation requiring hardware-level privileges, it evokes a fault or a trap; this trap is caught by the host hypervisor and inspected to see whether it is a legitimate and appropriate request. If it is, the host performs the privileged operation on behalf of the guest, again having the guest think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor, which then forwards the trap and the virtualization work to the level above that is involved or responsible. For instance, if L0 runs L1 and L1 attempts to run L2, the command to run L2 goes down to L0, and L0 then forwards it back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. See figure 1 in the paper for a better understanding of this.&lt;br /&gt;
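The difference between the two models above can be sketched with a toy routing function (a hypothetical illustration, not code from the paper): given the level of the guest that trapped, it returns the sequence of hypervisor levels the trap visits before it is handled.

```python
# Hypothetical sketch (illustrative names, not the Turtles code) of
# how traps are routed under the two nested-virtualization models.

def route_trap_single_level(trapping_level):
    """Single-level (x86) model: the hardware always exits to L0,
    and L0 then forwards the trap up the chain of hypervisors,
    one level at a time, until the responsible parent sees it."""
    path = [0]                          # every exit lands at L0 first
    for level in range(1, trapping_level):
        path.append(level)              # L0 forwards upward: L1, L2, ...
    return path

def route_trap_multi_level(trapping_level):
    """Multiple-level model: the hardware lets the parent
    hypervisor handle traps of its own guest directly."""
    return [trapping_level - 1]
```

For example, a trap in L2 visits L0 and then L1 under the single-level model, while under the multiple-level model it goes straight to L1; the extra hops are exactly where the exit-multiplication overhead comes from.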
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multi-level device assignment (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-optimizations to improve performance&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works: L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is running as a virtual machine: only L0 is using the architectural mode for a hypervisor. So, in order to multiplex the hardware by making L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2, which will eventually cause a trap. L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine to handle.&lt;br /&gt;
......&lt;br /&gt;
In the end, L0 or L1 (depending on the trap) finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits; to be continued (anyone who is interested, feel free to take this topic)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5639</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5639"/>
		<updated>2010-11-27T23:00:02Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can my find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us so he still in for the course, that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to a hold of me. Best way is through email. jslonosk@connect.Carleton.ca.  And if that is still in our group but doesn&#039;t participate,  too bad for him--JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need to as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those paper by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of whos doing what. I should get the background concept done hopefully by tonight.  If anyone want to help with the other sections that would be great, please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absloutely, I agree. But first, lets pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is good or bad, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought maybe if we each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary? I posted a link in references and I&#039;l try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided are excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution. But it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they are not altering the underlying architecture, and this is basically the most interesting thing about the paper, since x86 computers don&#039;t support nested virtualization in hardware. But apparently they were able to do it.&lt;br /&gt;
In addition, nested virtualization is generally not supported on x86 systems (the architecture was not designed with that in mind), yet, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows the ability to run virtualization in parallel on a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend a single level of memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machine directly through a hypervisor of their choice. In addition, nested virtualization provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper in which he talks about more recent research work on virtualization; in his first paragraph in particular, he refers to some more recent research by the VMware technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a faked platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that in order for a guest hypervisor to operate and gain hardware-level privileges, it triggers a fault or a trap; this trap or fault is then caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate command or request. If it is, the host grants the privilege to the guest, again having it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to the hypervisor at the level above that is responsible for it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then forwards this command back up to L1. This is the model we&#039;re interested in, because it is basically what x86 machines follow. Look at Figure 1 in the paper for a better understanding of this.&lt;br /&gt;
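The single-level trap flow described above can be sketched in a tiny Python toy model (ours, not the paper&#039;s code; the function name and level numbering are illustrative only): every trap, no matter which nested level caused it, first lands at L0, which either handles the exit itself or forwards it to the guest hypervisor responsible for the trapping VM.&lt;br /&gt;

```python
# Toy model of the single-level x86 architecture: the hardware always exits
# to L0, the bare-metal hypervisor, which then forwards the exit upward to
# whichever guest hypervisor should emulate the trapped instruction.

def trap_path(owner_level):
    """Return the list of hypervisor levels a trap visits before it is handled.

    owner_level is the hypervisor that should ultimately emulate the trapped
    instruction (0 for L0, 1 for L1, and so on). On x86 the hardware always
    exits to L0 first; L0 then forwards the exit to the responsible level.
    """
    path = [0]                    # hardware always exits to L0 first
    if owner_level != 0:
        path.append(owner_level)  # L0 forwards the exit upward to the owner
    return path

# An L2 instruction that L1 must emulate: hardware exits to L0, L0 forwards to L1.
print(trap_path(1))  # [0, 1]
# An L1 instruction that L0 itself handles: no forwarding needed.
print(trap_path(0))  # [0]
```

Note how, unlike the multiple-level model above, the path to any handler always starts at L0; the many extra exits this causes are exactly what the paper&#039;s optimizations target.&lt;br /&gt;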
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works: L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and only L0 is using the architecture&#039;s hypervisor mode. So, to make the multiplexing happen by having L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2 (enabling L0 to run L2 directly). L0 now launches L2, which eventually causes a trap. L0 handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s virtual machine&#039;s trap to handle.&lt;br /&gt;
In the end, L0 or L1 (depending on the trap) finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
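The VMCS merge above can be illustrated with a small Python sketch (our invention; the field names like host_state are made up for the example, not the real VMCS layout): L0 takes the specification L1 prepared for L2 and rewrites its host fields so every exit lands back in L0.&lt;br /&gt;

```python
# Illustrative sketch of the VMCS merge: L0 combines the VMCS it uses to run
# L1 (vmcs_0_1) with the VMCS that L1 prepared for L2 (vmcs_1_2) into
# vmcs_0_2, which lets L0 run L2 directly on the hardware while all exits
# still return to L0. Field names are invented for the example.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Build vmcs_0_2: L2's guest state, with host fields pointing back to L0."""
    merged = dict(vmcs_1_2)                        # start from L1's spec for L2
    merged["host_state"] = vmcs_0_1["host_state"]  # every exit must land in L0
    return merged

vmcs_0_1 = {"guest_state": "L1 registers", "host_state": "L0 entry point"}
vmcs_1_2 = {"guest_state": "L2 registers", "host_state": "L1 entry point"}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(vmcs_0_2["guest_state"])  # L2 registers
print(vmcs_0_2["host_state"])   # L0 entry point
```

The key point of the merge is in the last line of the function: L2 runs with its own guest state, but the hardware is told to exit into L0, never into L1.&lt;br /&gt;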
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5621</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5621"/>
		<updated>2010-11-27T03:24:02Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes three of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox, VMware, and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen. --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford, though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here; the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it, and will put down notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him. --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we gotta focus on that all together, not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what they think is good or bad, they can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured out is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in the references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs of the introduction section on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution. But it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they are not altering the underlying architecture, and this is basically the most interesting thing about the paper, since x86 computers don&#039;t support nested virtualization in hardware. But apparently they were able to do it.&lt;br /&gt;
In addition, nested virtualization is generally not supported on x86 systems (the architecture was not designed with that in mind), yet, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows the ability to run virtualization in parallel on a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend a single level of memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machine directly through a hypervisor of their choice. In addition, nested virtualization provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper in which he talks about more recent research work on virtualization; in his first paragraph in particular, he refers to some more recent research by the VMware technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a faked platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that in order for a guest hypervisor to operate and gain hardware-level privileges, it triggers a fault or a trap; this trap or fault is then caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate command or request. If it is, the host grants the privilege to the guest, again having it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to the hypervisor at the level above that is responsible for it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then forwards this command back up to L1. This is the model we&#039;re interested in, because it is basically what x86 machines follow. Look at Figure 1 in the paper for a better understanding of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works: L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch will trap to L0, because L1 is running as a virtual machine, due to the fact that only L0 is using the architecture&#039;s hypervisor mode&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5614</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5614"/>
		<updated>2010-11-27T03:04:43Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyway, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and such. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put down notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The related work section has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone comes across something he thinks is good or bad, he can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can get a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I noticed is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that: look at the first two paragraphs of the introduction on page 1. But you&#039;re right, they don&#039;t really elaborate; I think that&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here: should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section for our fellow students to understand what the paper is about without having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concepts, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization: why we use it (the paper gives the example of XP inside Win 7), and maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture, and this is basically the most interesting thing about the paper, since x86 computers don&#039;t support nested virtualization in hardware. But apparently they were able to do it.&lt;br /&gt;
In general, nested virtualization is not supported on x86 systems (the architecture is not designed with it in mind), but, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows the ability for parallel virtualization on a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
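To see why memory virtualization gets harder with nesting, consider the translations involved while L2 runs: an L2 address must be translated to what L1 considers physical, and that again to what L0 considers physical. A toy sketch in C (the function names and fixed offsets here are made up purely for illustration; real translation walks page tables):&lt;br /&gt;

```c
/* Toy model of the translation chain when L2 is running. Real
 * translation uses page tables, not fixed offsets; the offsets
 * below are invented purely for illustration. */

static unsigned long l2_to_l1(unsigned long addr) {
    return addr + 0x1000;  /* L2 "physical" -> L1 "physical" */
}

static unsigned long l1_to_l0(unsigned long addr) {
    return addr + 0x2000;  /* L1 "physical" -> real machine address */
}

/* Multi-dimensional paging: L0 maintains a single table whose
 * entries already encode the composition of both translations,
 * so the hardware never needs the intermediate step. */
static unsigned long l2_to_l0(unsigned long addr) {
    return l1_to_l0(l2_to_l1(addr));
}
```

The point of multi-dimensional paging, as we understand it, is to let the hardware walk one compressed table for the composed translation instead of exiting to L0 at every intermediate step.&lt;br /&gt;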
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, it provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: where every hypervisor handles every other hypervisor running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1: if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM is handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;: every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a faked environment to the hypervisor running on top of it (the guest hypervisor), letting it believe it is running on the actual hardware. The idea is that for a guest hypervisor to perform a privileged, hardware-level operation, it raises a trap or fault; this trap is caught by the main host hypervisor and inspected to see whether it is a legitimate request. If it is, the host performs the privileged operation on the guest&#039;s behalf, again leaving the guest thinking it is running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor, which then forwards the trap and the virtualization work to the level responsible for handling it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 first traps down to L0, and L0 then forwards it back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. See figure 1 in the paper for a clearer picture of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multi-level device assignment (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5613</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5613"/>
		<updated>2010-11-27T03:03:13Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyway, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and such. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put down notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The related work section has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone comes across something he thinks is good or bad, he can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can get a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I noticed is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that: look at the first two paragraphs of the introduction on page 1. But you&#039;re right, they don&#039;t really elaborate; I think that&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here: should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap handling between the nested environments, so that&#039;s definitely a contribution. It&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of these concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section for our fellow students to understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite our sources. If the source is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us with another week to do it. I am sure we all have at least 3 projects due soon, other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization: why we use it (the paper gives the example of XP mode inside Windows 7), and maybe going over the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest and host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture, unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture; this is perhaps the most interesting thing about the paper, since x86 hardware doesn&#039;t support nested virtualization natively, yet they were able to do it.&lt;br /&gt;
In general, nested virtualization is not supported on x86 systems (the architecture was not designed with it in mind), but, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows that an operating system can itself act as a host for virtual machines; virtualizing such an OS in turn requires nested virtualization.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity; the next evolutionary step is to extend single-level memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the rapidly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user can manage their own virtual machine directly through a hypervisor of choice. In addition, nested virtualization provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a mailing-list post by one of the authors of our assigned paper in which he talks about more recent research on virtualization; in his first paragraph in particular, he refers to some more recent research by the VMware technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running directly on top of it. For instance, if L0 (the host hypervisor) runs L1, and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping for it, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that whenever a guest hypervisor tries to perform an operation requiring hardware-level privileges, it triggers a fault or a trap. This trap is caught by the main host hypervisor and inspected to see whether it is a legitimate, appropriate request; if it is, the host emulates the operation on the guest&#039;s behalf, again letting the guest think that it&#039;s running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor. The host hypervisor then forwards the trap, along with the associated virtualization state, to the level responsible for handling it. For instance, if L0 runs L1 and L1 attempts to run L2, the instruction to launch L2 traps down to L0, and L0 then forwards it back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. See Figure 1 in the paper for a better picture of this.&lt;br /&gt;
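The single-level flow just described (every trap funnels down to L0, which then acts on behalf of the responsible level) can be sketched as a toy Python model. This is only an illustration; the function name and message format are made up, and the real implementation lives inside KVM and the VMX hardware interface.&lt;br /&gt;

```python
# Toy model of the single-level "trap and emulate" flow described above.
# Assumption: names and output format are invented for illustration only.

def handle_trap(trapping_level, instruction):
    """Every privileged instruction from any guest level traps to L0.

    L0 inspects the trap and emulates it on behalf of the hypervisor one
    level below the trapping guest, so that guest still believes it is
    running on bare metal.
    """
    parent = trapping_level - 1  # hypervisor responsible for this guest
    return "L0 emulates {0} from L{1} on behalf of L{2}".format(
        instruction, trapping_level, parent)

# L1 tries to launch L2: the launch instruction traps all the way to L0,
# which performs the work and attributes it to L1, the responsible level.
print(handle_trap(2, "vmlaunch"))  # L0 emulates vmlaunch from L2 on behalf of L1
print(handle_trap(1, "cpuid"))     # L0 emulates cpuid from L1 on behalf of L0
```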
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
1. Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
2. Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
3. Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
4. Micro-Optimizations&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* Security: being able to run other hypervisors without being detected.&lt;br /&gt;
&lt;br /&gt;
* Testing and debugging of hypervisors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of VM exits; to be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5612</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5612"/>
		<updated>2010-11-27T03:02:41Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please, if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. Did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, as I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (the Turtles project) to be quite interesting and approachable. In fact, we&#039;ve already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our own machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be at tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen. --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford, though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for editing and such. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here; the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections.&lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. I&#039;m not going to Ozzy, so I have free time now. I am reading the paper again to refresh my memory of it and will put down notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him. --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The related work section of the paper has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we have to focus on that all together, not just one person, for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe get a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly explain why nested virtualization is necessary. I posted a link in the references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs of the introduction on page 1. But you&#039;re right, they don&#039;t really elaborate; I think that&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, CPU protection rings, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit them today and talk about other things like nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, memory paging (which is discussed in the paper), and CPU protection rings (which again they touch on at some point in the paper). I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap handling between the nested environments, so that&#039;s definitely a contribution. It&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of these concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section for our fellow students to understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite our sources. If the source is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us with another week to do it. I am sure we all have at least 3 projects due soon, other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization: why we use it (the paper gives the example of XP mode inside Windows 7), and maybe going over the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest and host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture, unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture; this is perhaps the most interesting thing about the paper, since x86 hardware doesn&#039;t support nested virtualization natively, yet they were able to do it.&lt;br /&gt;
In general, nested virtualization is not supported on x86 systems (the architecture was not designed with it in mind), but, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows that an operating system can itself act as a host for virtual machines; virtualizing such an OS in turn requires nested virtualization.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity; the next evolutionary step is to extend single-level memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the rapidly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user can manage their own virtual machine directly through a hypervisor of choice. In addition, nested virtualization provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a mailing-list post by one of the authors of our assigned paper in which he talks about more recent research on virtualization; in his first paragraph in particular, he refers to some more recent research by the VMware technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running directly on top of it. For instance, if L0 (the host hypervisor) runs L1, and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping for it, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that whenever a guest hypervisor tries to perform an operation requiring hardware-level privileges, it triggers a fault or a trap. This trap is caught by the main host hypervisor and inspected to see whether it is a legitimate, appropriate request; if it is, the host emulates the operation on the guest&#039;s behalf, again letting the guest think that it&#039;s running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor. The host hypervisor then forwards the trap, along with the associated virtualization state, to the level responsible for handling it. For instance, if L0 runs L1 and L1 attempts to run L2, the instruction to launch L2 traps down to L0, and L0 then forwards it back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. See Figure 1 in the paper for a better picture of this.&lt;br /&gt;
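The single-level flow just described (every trap funnels down to L0, which then acts on behalf of the responsible level) can be sketched as a toy Python model. This is only an illustration; the function name and message format are made up, and the real implementation lives inside KVM and the VMX hardware interface.&lt;br /&gt;

```python
# Toy model of the single-level "trap and emulate" flow described above.
# Assumption: names and output format are invented for illustration only.

def handle_trap(trapping_level, instruction):
    """Every privileged instruction from any guest level traps to L0.

    L0 inspects the trap and emulates it on behalf of the hypervisor one
    level below the trapping guest, so that guest still believes it is
    running on bare metal.
    """
    parent = trapping_level - 1  # hypervisor responsible for this guest
    return "L0 emulates {0} from L{1} on behalf of L{2}".format(
        instruction, trapping_level, parent)

# L1 tries to launch L2: the launch instruction traps all the way to L0,
# which performs the work and attributes it to L1, the responsible level.
print(handle_trap(2, "vmlaunch"))  # L0 emulates vmlaunch from L2 on behalf of L1
print(handle_trap(1, "cpuid"))     # L0 emulates cpuid from L1 on behalf of L0
```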
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
1. Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
2. Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
3. Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
4. Micro-Optimizations&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* Security: being able to run other hypervisors without being detected.&lt;br /&gt;
&lt;br /&gt;
* Testing and debugging of hypervisors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of VM exits; to be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5611</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5611"/>
		<updated>2010-11-27T02:33:31Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Theory (Section 3.1) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please, if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable; in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches, like nested virtualization, hypervisors, supervisors, etc, things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. Haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird because in last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if Shawn is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the critique, so we gotta focus on that altogether, not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is good or bad, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging which is discussed in the paper, and computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap call efficiency between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper, since x86 computers don&#039;t support nested virtualization in hardware. But apparently they were able to do it.&lt;br /&gt;
In addition, nested virtualization is generally not supported on x86 systems (the architecture is not designed with that in mind), but for example Windows 7 runs an XP VM under the covers when running XP programs, which shows the ability for parallel virtualization on a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, it provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: where every hypervisor handles every other hypervisor running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trap handling and such.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. This model is tied into the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea here is that in order for a guest hypervisor to operate and gain hardware-level privileges, it evokes a fault or a trap. This trap or fault is then caught and handled by the main host hypervisor and inspected to see if it&#039;s a legitimate or appropriate command or request; if it is, the host gives the privilege to the guest, again having it think that it&#039;s actually running on the main bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to the level above that is involved or responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2. The command to run L2 goes down to L0, and then L0 forwards this command back to L1. This is the model we&#039;re interested in because this is what x86 machines basically follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
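To make the forwarding path in the single-level model concrete, here is a small, purely illustrative Python sketch (not from the paper; the function and names are made up) of how a trap travels: the hardware always exits to L0, and L0 forwards the trap to the hypervisor responsible for the level that trapped.&lt;br /&gt;

```python
# Illustrative sketch only (not the paper's code): in single-level
# nested virtualization on x86, every trap exits to L0 first, and
# L0 then forwards it to the parent hypervisor of the level that
# trapped, which emulates the hardware for its guest.

def handle_trap(trapping_level):
    """Return the path a trap takes, e.g. for a trap in L2:
    hardware exits to L0, L0 forwards to L1, L1 emulates for L2."""
    path = ["L0"]                      # every exit lands at L0
    parent = trapping_level - 1        # the responsible hypervisor
    if parent != 0:                    # L0 already has the trap
        path.append("L" + str(parent))
    path.append("emulate for L" + str(trapping_level))
    return path

# A trap in L2 (running on L1, which runs on L0):
print(handle_trap(2))   # ['L0', 'L1', 'emulate for L2']
```

For contrast, in the multiple-level model the trap from L2 would be handled directly by L1, without the round trip through L0.&lt;br /&gt;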
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5420</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5420"/>
		<updated>2010-11-23T00:32:44Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us so he&#039;s still in for the course, that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable; in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches, like nested virtualization, hypervisors, supervisors, etc, things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. Haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird because in last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if Shawn is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the critique, so we gotta focus on that altogether, not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is good or bad, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
&lt;br /&gt;
==The idea/goal==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper, since x86 computers don&#039;t support nested virtualization in hardware. But apparently they were able to do it.&lt;br /&gt;
In addition, nested virtualization is generally not supported on x86 systems (the architecture is not designed with that in mind), but for example Windows 7 runs an XP VM under the covers when running XP programs, which shows the ability for parallel virtualization on a single hypervisor.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: where every hypervisor handles every other hypervisor running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trap handling and such.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. This model is tied into the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea here is that in order for a guest hypervisor to operate and gain hardware-level privileges, it evokes a fault or a trap. This trap or fault is then caught and handled by the main host hypervisor and inspected to see if it&#039;s a legitimate or appropriate command or request; if it is, the host gives the privilege to the guest, again having it think that it&#039;s actually running on the main bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to the level above that is involved or responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2. The command to run L2 goes down to L0, and then L0 forwards this command back to L1. This is the model we&#039;re interested in because this is what x86 machines basically follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* Security: other hypervisors can be run without being detected.&lt;br /&gt;
&lt;br /&gt;
* Testing and debugging of hypervisors.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of VM exits. To be continued (anyone who is interested, feel free to take this topic).&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so, Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5024</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5024"/>
		<updated>2010-11-15T19:07:53Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Group members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Smcilroy&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5023</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5023"/>
		<updated>2010-11-15T19:06:46Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Smcilroy&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info in my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4604</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4604"/>
		<updated>2010-10-15T07:31:30Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU mechanism that prevents a process from accessing the kernel is commonly known as protected mode; system calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions to optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In UNIX and Linux, the system calls are implemented as small routines written mostly in the C programming language.&lt;br /&gt;
&lt;br /&gt;
The UNIX and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that don&#039;t really fit in the other categories, like system calls dealing with errors. Today, the UNIX and Linux operating systems contain hundreds of system calls, but in general they all grew out of the 35 system calls that came with the original UNIX in the early 1970s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system in the operating system. Creating, deleting, opening, and closing files are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was therefore introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Argument flags are used to set everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls over the years were the addition of new status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, freeing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest versions of UNIX, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added, which solved this problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
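As an aside, these calls can be exercised from a high-level language; the sketch below uses Python&#039;s os module, which wraps the corresponding system calls directly (the file names are made up for the example):

```python
import os
import tempfile

# Work in a scratch directory so the example is self-contained.
base = tempfile.mkdtemp()

# mkdir: create a directory.
os.mkdir(os.path.join(base, "subdir"))

# open with O_CREAT behaves like the historical creat call;
# the third argument is the permission mode.
fd = os.open(os.path.join(base, "a.txt"), os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"hello")
os.close(fd)  # close frees the descriptor for reuse

# rename: change the name of the file (added in 4.2BSD).
os.rename(os.path.join(base, "a.txt"), os.path.join(base, "b.txt"))
exists = os.path.exists(os.path.join(base, "b.txt"))
```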
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used a 16-bit address offset, but it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit address offsets. &#039;&#039;lseek&#039;&#039; is still used in modern Linux and UNIX systems, and developers have been implementing &#039;&#039;lseek64&#039;&#039;, a version that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, the stat calls return a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are only used in a UNIX environment; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
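A short sketch of the seek and stat family, again through Python&#039;s os wrappers (the scratch file name is invented for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "f.txt")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.write(fd, b"abcdef")

os.lseek(fd, 2, os.SEEK_SET)   # lseek: move the file offset to byte 2
chunk = os.read(fd, 3)         # read resumes from the new offset

size = os.fstat(fd).st_size    # fstat: file status via a descriptor
os.close(fd)
```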
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take pathnames relative to a directory file descriptor. They can easily be spotted, as the system call names all end with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
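The &#039;at&#039; calls can be illustrated through Python&#039;s os module, whose dir_fd keyword maps onto them on Linux (the file name here is made up; dir_fd support is platform-dependent):

```python
import os
import tempfile

base = tempfile.mkdtemp()
dirfd = os.open(base, os.O_RDONLY)   # descriptor naming the directory

# openat: "data.txt" is resolved relative to dirfd, not the CWD.
fd = os.open("data.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dirfd)
os.close(fd)

# fchmodat / unlinkat analogues use the same dir_fd keyword.
os.chmod("data.txt", 0o600, dir_fd=dirfd)
os.unlink("data.txt", dir_fd=dirfd)
os.close(dirfd)
gone = not os.path.exists(os.path.join(base, "data.txt"))
```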
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware. They are mainly used to request and release devices, logically attach or detach them, get and modify device attributes, and read from and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load and unload file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, mostly the creation of new mount flags to enhance performance; for example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: devices are accessed as if they were files, using the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the system call &#039;&#039;mmap&#039;&#039;, which is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device. This system call is still used in a UNIX environment, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call, which is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units. This enables the mapping of large files.&lt;br /&gt;
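A minimal file-mapping sketch using Python&#039;s mmap module, which wraps the mmap system call (the file name is made up; stores through the mapping become visible through the ordinary read path):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "m.bin")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.write(fd, b"\x00" * 4096)       # the file must cover the mapped range

m = mmap.mmap(fd, 4096)            # map the file into our address space
m[0:5] = b"hello"                  # stores go through the mapping...
m.flush()                          # ...and are written back to the file
m.close()

data = os.pread(fd, 5, 0)          # read back through the ordinary API
os.close(fd)
```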
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 of UNIX, the &#039;&#039;ioctl&#039;&#039; system call is used for device-specific operations that can&#039;t be done using the standard system calls, which helps deal with the multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard set for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer&#039;s own information back to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regards to system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is getting and setting the time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX the call used was &#039;stime&#039;, which set the system&#039;s idea of the time and date by altering the seconds count. &#039;stime&#039; is still used by Linux because it works, unlike the timezone argument of &#039;settimeofday&#039;, which was meant to change timezones (via the tz_dsttime field) as well as the time: that field never worked, and each occurrence of it in the kernel source (apart from the declaration) is a bug. &lt;br /&gt;
&lt;br /&gt;
The second subtype is get/set system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039;, and &#039;write&#039;. &#039;open&#039; opens a file so it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; is used to indicate that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel, as does the similar &#039;uname&#039; (which is also used in the newer versions of UNIX, not the older ones), &#039;iopl&#039; changes the I/O privilege level, and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third subtype is get/set process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes, some of which are common to both UNIX and Linux: &#039;stat&#039;, which gets file status, &#039;fork&#039;, which spawns a new process, and &#039;stty&#039;, which sets the mode of the typewriter. The &#039;wait&#039; system call is used in both as well; the only real difference is that the Linux version stores status information in an integer whose address is passed as an argument. In Linux there are many more system calls of this type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets the parent process ID. The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such their use (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
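The attribute-query calls above are easy to demonstrate; this sketch uses Python&#039;s os wrappers for stat, getpid and getppid:

```python
import os

st = os.stat(".")          # stat: status of the current directory
mode = st.st_mode          # file type and permission bits

pid = os.getpid()          # ID of the calling process
ppid = os.getppid()        # ID of its parent; this call never fails
```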
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are eleven system calls that make up the process control calls. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039;, &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent process and the &lt;br /&gt;
other the child process. When &#039;&#039;fork()&#039;&#039; succeeds, it returns 0 to the child process and returns the PID of the child process to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: This call makes a parent process wait for a child process to end. It returns the PID of the child process that &lt;br /&gt;
finished. Wait fails if the process has no child process to wait for or if its argument points to an invalid address.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call &lt;br /&gt;
takes a binary file as an argument and converts it into a process. When the system call works properly it does &lt;br /&gt;
not return; instead, it gives control to the new process, which replaces the process that made the system call. &lt;br /&gt;
Each of them is called when different kinds of arguments are given.&lt;br /&gt;
&lt;br /&gt;
The following are the definitions of these system calls as described by this source: [http://www.di.uevora.pt/~lmr/syscalls.html]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments to the new program (argv[]). The list is terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This system call controls what happens when a given signal is delivered to the process. When the program receives the signal, it can act in three different ways. The first is to ignore it completely: no matter how many times the signal is sent, the process will not do anything because of it. The only signals that can&#039;t be ignored or caught are SIGKILL and SIGSTOP. The second is to leave the signal in its default state, which for most signals means that when the process receives it, the process will end. The last option is to catch the signal: when this occurs, the UNIX system gives control to a handler function that the process has registered for that signal. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a process. It fails if the signal name is not a valid signal, or if there is no process with a PID that matches the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts, except that of the exec group only &#039;&#039;execve&#039;&#039; exists as a true system call (the others are library wrappers around it). These system calls also behave the same way in Linux. However, the system call &#039;&#039;signal()&#039;&#039; is not recommended because of its different implementations in different versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will almost never see major modifications, but other system calls based on them may be created for specific cases, which makes it easier to write programs.&lt;br /&gt;
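The fork/exec/wait cycle described above can be sketched as follows, using Python&#039;s os wrappers (POSIX-only; the child&#039;s exit status of 7 is an arbitrary choice for the example):

```python
import os
import sys

pid = os.fork()                      # 0 in the child, child PID in the parent
if pid == 0:
    # execv replaces the child's image; on success it never returns.
    os.execv(sys.executable, [sys.executable, "-c", "raise SystemExit(7)"])
else:
    _, status = os.waitpid(pid, 0)   # wait blocks until the child exits
    code = os.WEXITSTATUS(status)    # recover the child's exit status
```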
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the concept of processes having the ability to communicate with one another. Just as humans use a telephone as their portal to communicate with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications calls: pipelines, messages, semaphores, and shared memory.&lt;br /&gt;
The following are the system calls that belong to each of the subgroups.  &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039;. The call has the form int pipe(int file_descriptors[2]), where file_descriptors is an array with two entries: one descriptor for reading data and one for writing data. Both writing and reading proceed in sequential order and complete their task fully; i.e., there are no partial writes: the pipe writes the whole block of data that was sent before completing the transmission. The same concept holds for reading, where data is read all the way through before another read consumes new information coming into the pipe. &lt;br /&gt;
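A minimal parent/child pipeline along these lines, using Python&#039;s os wrappers for pipe, fork, read and write (POSIX-only):

```python
import os

r, w = os.pipe()              # two descriptors: read end and write end
pid = os.fork()

if pid == 0:                  # child: the writer
    os.close(r)               # close the end it does not use
    os.write(w, b"ping")
    os.close(w)
    os._exit(0)
else:                         # parent: the reader
    os.close(w)
    msg = os.read(r, 4)       # blocks until the child has written
    os.close(r)
    os.waitpid(pid, 0)        # reap the child
```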
&lt;br /&gt;
&lt;br /&gt;
Messages: &#039;&#039;msgget()&#039;&#039;, &#039;&#039;msgsnd()&#039;&#039;, &#039;&#039;msgrcv()&#039;&#039;, &#039;&#039;msgctl()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Semaphores: &#039;&#039;semget()&#039;&#039;, &#039;&#039;semop()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Shared Memory: &#039;&#039;shmget()&#039;&#039;, &#039;&#039;shmat()&#039;&#039;, &#039;&#039;shmdt()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
System calls have been an essential component of the structure of the Linux kernel (2.6.30+) and UNIX operating systems for a long time. They are the gateway between user space and kernel services: they allow user-space programs to obtain kernel services that they do not have the authority to perform directly. Over the years of development of the Linux and UNIX OS, the system calls themselves have not changed drastically. Rather than radical changes, development has mostly added more specific system calls to solve new issues that arise within the OS. Hence, the original 35 system calls have grown to an astonishing quantity consisting of hundreds of system calls. With hundreds of system calls at one&#039;s disposal, all can be categorized into 6 major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. Operating systems are colossal programs consisting of many intricate pieces all coming together to form what we now know today as the Linux kernel (2.6.30+) or UNIX. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programmer&#039;s Manual, First Edition, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual, The Unix and Linux Forums. http://www.unix.com/man-page/FreeBSD/2/&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4598</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4598"/>
		<updated>2010-10-15T07:23:46Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions. The CPU enforces this separation, commonly referred to as protected mode, and system calls are the sanctioned exception to the rule. For example, older x86 processors used an interrupt mechanism to switch from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Linux and UNIX, the system call implementations themselves are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
UNIX and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not fit cleanly into the other categories, such as system calls dealing with errors. Today, UNIX and Linux operating systems contain hundreds of system calls, but nearly all of them descend from the roughly 35 system calls that shipped with the original UNIX in the early 1970s. In the next sections, we describe the various system calls in each of these categories, trace their evolution through history (major changes in functionality) and compare them with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group handle every operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, so a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow them.&lt;br /&gt;
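These permission and directory calls can be exercised from user space through Python&#039;s os module, which wraps the underlying system calls. A minimal sketch (the scratch file name is invented for illustration):&lt;br /&gt;

```python
import os
import stat
import tempfile

# Create a scratch file, then change its permission bits with chmod().
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.txt")
with open(path, "w") as f:
    f.write("hello")

os.chmod(path, 0o600)                       # owner read/write only
mode = stat.S_IMODE(os.stat(path).st_mode)  # read the bits back via stat()

# chdir() changes the process's current working directory.
old_cwd = os.getcwd()
os.chdir(tmpdir)
new_cwd = os.getcwd()
os.chdir(old_cwd)                           # restore the original directory
```

Here chmod() restricts the file to owner read/write and chdir() moves the process into the scratch directory, mirroring the calls described above.&lt;br /&gt;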
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments select everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls over the years were additions of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, releasing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. In the earliest versions of UNIX, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As file systems became more complex, these new system calls gave users better control over them.&lt;br /&gt;
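The open/creat flag behaviour is easy to demonstrate with Python&#039;s os.open wrapper; this sketch (file name invented) shows O_CREAT standing in for the old creat() and O_APPEND as a status flag:&lt;br /&gt;

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")

# open() with O_CREAT acts like the old creat() call; O_APPEND is a status flag.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"first line\n")
os.write(fd, b"second line\n")   # appended after the first write
os.close(fd)                     # releases the descriptor for reuse

with open(path, "rb") as f:
    contents = f.read()
```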
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were also part of the first UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change came in UNIX System V Release 4 (SVR4), where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used a 16-bit offset and was quickly replaced by &#039;&#039;lseek&#039;&#039;, which allows 32-bit offsets and is still used in modern Linux and UNIX systems; developers have since introduced &#039;&#039;lseek64&#039;&#039;, a variant that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned nanosecond resolution in the file&#039;s timestamp fields. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used in UNIX environments; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
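The lseek/fstat pair described above can be sketched through Python&#039;s os wrappers (file name invented for illustration):&lt;br /&gt;

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.write(fd, b"abcdef")

# lseek() repositions the file offset before the next read or write.
os.lseek(fd, 2, os.SEEK_SET)     # jump to byte offset 2
chunk = os.read(fd, 3)           # reads the three bytes b"cde"

# fstat() reports file status, such as size, via a file descriptor.
size = os.fstat(fd).st_size
os.close(fd)
```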
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
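The difference between link(), symlink() and unlink() is visible in a few lines of Python (all file names invented):&lt;br /&gt;

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "target.txt")
with open(target, "w") as f:
    f.write("data")

hard = os.path.join(tmpdir, "hard.txt")
soft = os.path.join(tmpdir, "soft.txt")
os.link(target, hard)      # hard link: a second name for the same inode
os.symlink(target, soft)   # symbolic link: a file containing a path

nlink = os.stat(target).st_nlink   # now 2: "target.txt" and "hard.txt"
is_sym = os.path.islink(soft)

os.unlink(soft)            # removes only the symbolic link, not the target
target_survives = os.path.exists(target)
```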
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were added so that calls could take pathnames relative to a directory file descriptor as arguments. They are easily spotted, as their names all end in &#039;at&#039;. Here is a sample of these system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
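Python exposes the &#039;at&#039; family through the dir_fd parameter: on systems that support it (Linux does), os.open with dir_fd is implemented with openat(). A sketch with invented names:&lt;br /&gt;

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
dir_fd = os.open(tmpdir, os.O_RDONLY)   # a descriptor for the directory itself

# With dir_fd set, the relative path "note.txt" is resolved against tmpdir,
# which is exactly what openat() does under the hood.
fd = os.open("note.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dir_fd)
os.write(fd, b"relative to tmpdir")
os.close(fd)

created = os.path.exists(os.path.join(tmpdir, "note.txt"))
os.close(dir_fd)
```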
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in the first version of UNIX in 1971. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, most of them the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments are used to control the device; devices are accessed as if they were files, using the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With SVR4 came the &#039;&#039;mmap&#039;&#039; system call, which maps and unmaps files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device directly. This system call is still used in UNIX environments, and since Linux 2.4 there has also been an &#039;&#039;mmap2&#039;&#039; system call. It is essentially the same as &#039;&#039;mmap&#039;&#039;, except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of larger files.&lt;br /&gt;
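Memory mapping is available from user space through Python&#039;s mmap module, which wraps mmap(). A minimal sketch mapping an invented file and reading and writing through the mapping:&lt;br /&gt;

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mapped.bin")
with open(path, "wb") as f:
    f.write(b"hello mmap")

fd = os.open(path, os.O_RDWR)
m = mmap.mmap(fd, 0)         # map the whole file; behaves like a mutable buffer
first = bytes(m[0:5])        # read through the mapping
m[0:5] = b"HELLO"            # write through the mapping
m.flush()                    # push the change back to the file
m.close()
os.close(fd)

with open(path, "rb") as f:
    contents = f.read()
```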
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;ioctl&#039;&#039; system call, introduced in Version 7 of UNIX, is used for device-specific operations that cannot be performed through the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware dependent, so there is no single standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that report the system&#039;s own information back to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore these three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is getting and setting the time and date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, &#039;time&#039;, which returns the time in seconds, and a few others like &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which sets the system&#039;s idea of the time and date in seconds. &#039;stime&#039; is still available in Linux because it has proven successful, unlike &#039;settimeofday&#039;, which was created to set the timezone (the tz_dsttime field) as well as the time; that field never worked as intended, and every occurrence of it in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
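The get-time side of this subtype can be sketched with Python&#039;s time module, which wraps the underlying calls (setting the clock requires privileges, so only reading is shown):&lt;br /&gt;

```python
import time

# time() returns seconds since the Unix epoch, like the classic time() call
# that pairs with stime().
now = time.time()

# clock_gettime() exposes the newer POSIX clocks with nanosecond resolution.
realtime = time.clock_gettime(time.CLOCK_REALTIME)
```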
&lt;br /&gt;
The second subtype is getting and setting system data. UNIX does this using the following calls: &#039;open&#039;, &#039;read&#039;, &#039;close&#039; and &#039;write&#039;. &#039;open&#039; opens a file so that it can be written to or read from, &#039;read&#039; retrieves data from the file, &#039;write&#039; modifies data in the file, and &#039;close&#039; indicates that the file is no longer in use. Linux uses the same set of calls for the same purposes. In addition, Linux has its own unique system calls: &#039;olduname&#039; gets the name of and information about the current kernel; similar to it is &#039;uname&#039;, which does the same and is also used in newer (but not older) versions of UNIX; &#039;iopl&#039; changes the I/O privilege level; and &#039;sysfs&#039; gets file system type information.&lt;br /&gt;
&lt;br /&gt;
The third subtype is getting and setting process, file, or device attributes. In UNIX there are several system calls for processing file and device attributes; some examples common to both UNIX and Linux are &#039;stat&#039;, which gets file status, &#039;fork&#039;, which spawns a new process, and &#039;stty&#039;, which sets the mode of the typewriter. Linux has many more system calls in this subtype; a few of them are &#039;capget&#039;, which gets the capabilities of a process, &#039;capset&#039;, which sets them, and &#039;getppid&#039;. &#039;capget&#039; and &#039;capset&#039; interact with the raw kernel interface for getting and setting thread capabilities; these two system calls are specific to Linux, and as such their use (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the parent of the calling process and never fails.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
Process Control calls are system calls that handle the start, termination and other tasks that might be required &lt;br /&gt;
for a process to run correctly.&lt;br /&gt;
&lt;br /&gt;
In UNIX there are eleven system calls that make up process control. These are:&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;, &#039;&#039;wait()&#039;&#039;, &#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039;, &#039;&#039;execv()&#039;&#039;, &#039;&#039;execve()&#039;&#039;, &#039;&#039;exit()&#039;&#039;, &#039;&#039;signal()&#039;&#039; and &#039;&#039;kill()&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;fork()&#039;&#039;: Takes a process and creates an identical process, making one the parent and the other the child. When &#039;&#039;fork()&#039;&#039; succeeds, it returns 0 to the child process and the PID of the child to the parent process. When it fails, &#039;&#039;fork()&#039;&#039; returns -1 to the parent process.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;wait()&#039;&#039;: Makes a parent process wait for a child process to end. It returns the PID of the child process that finished. &#039;&#039;wait()&#039;&#039; fails if the caller has no child process to wait for, or if its status argument points to an invalid address.&lt;br /&gt;
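The fork()/wait() interplay can be demonstrated in a few lines using Python&#039;s os wrappers (the child exit status 7 is an arbitrary value chosen for illustration):&lt;br /&gt;

```python
import os

# fork() duplicates the process; wait() lets the parent collect the child.
pid = os.fork()
if pid == 0:
    # Child: fork() returned 0 here. Exit immediately with status 7.
    os._exit(7)

# Parent: fork() returned the child's PID; waitpid() blocks until it exits.
done_pid, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)   # extract the child's exit status
```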
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;, &#039;&#039;execlp()&#039;&#039;, &#039;&#039;execle()&#039;&#039;, &#039;&#039;execvp()&#039;&#039; and &#039;&#039;execv()&#039;&#039; are system calls based on the same principle: the call takes a binary file as an argument and turns it into a running process. When the system call works properly, it does not return; instead it gives control to the new process, which replaces the process that made the call. Each variant is used when a different form of arguments is given.&lt;br /&gt;
&lt;br /&gt;
The following definitions of these system calls are taken from this reference: [http://www.di.uevora.pt/~lmr/syscalls.html]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execl()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The rest of the arguments are a list of command-line arguments to the new program (argv[]), terminated with a null pointer.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execle()&#039;&#039;: Same as execl(), except that the end of the argument list is followed by a pointer to a null-terminated list of character pointers that is passed as the environment of the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execv()&#039;&#039;: Takes the path name of an executable program (binary file) as its first argument. The second argument is a pointer to a list of character pointers (like argv[]) that is passed as command-line arguments to the new program.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;execve()&#039;&#039;: Same as execv(), except that a third argument is given as a pointer to a list of character pointers (like argv[]) that is passed as the environment of the new program.&lt;br /&gt;
       &lt;br /&gt;
&#039;&#039;execlp()&#039;&#039;: Same as execl(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&#039;&#039;execvp()&#039;&#039;: Same as execv(), except that the program name doesn&#039;t have to be a full path name, and it can be a shell program instead of an executable module.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;signal()&#039;&#039;: This call determines how a process reacts when a given signal is delivered to it. A process can respond in three ways. The first is to ignore the signal completely: no matter how many times the signal is sent, the process does nothing in response. (The only signals that can be neither ignored nor caught are SIGKILL and SIGSTOP.) The second is to leave the signal at its default disposition, which for most signals means the process terminates when it receives it. The last option is to catch the signal: when it arrives, the system gives control to a handler function that executes whatever action is appropriate for the process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;kill()&#039;&#039;: Sends a signal to a specified process. It fails if the signal argument is not a valid signal, or if no process has a PID matching the argument value.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;exit()&#039;&#039;: This call ends the process that calls it and returns the exit status value.&lt;br /&gt;
&lt;br /&gt;
In Linux, all of these UNIX system calls have counterparts that behave the same way, except that of the exec group only &#039;&#039;execve&#039;&#039; exists as a true system call (the others are library wrappers around it). However, the &#039;&#039;signal()&#039;&#039; call is not recommended, because its implementation differs between versions of Linux and UNIX; it is better to use &#039;&#039;sigaction()&#039;&#039;, which changes the action a process takes when it receives any valid signal except SIGKILL and SIGSTOP. As newer versions of Linux are released, these system calls will likely never undergo major modifications, but other system calls based on them may be created for specific cases, making it easier to write programs.&lt;br /&gt;
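Catching a signal can be sketched in Python, whose signal.signal() is built on the sigaction() interface on modern systems; here the process sends SIGUSR1 to itself with kill() and the handler records it:&lt;br /&gt;

```python
import os
import signal

received = []

def handler(signum, frame):
    # The handler runs when the signal is caught.
    received.append(signum)

# Install the handler (the "catch" disposition described above).
signal.signal(signal.SIGUSR1, handler)

# Deliver the signal to our own process with kill().
os.kill(os.getpid(), signal.SIGUSR1)
```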
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
The communication calls relate to the ability of processes to communicate with one another. Just as humans use a telephone as their portal for communicating with each other, communication calls use &amp;quot;pipes&amp;quot; as their gateway. &lt;br /&gt;
&lt;br /&gt;
In UNIX there are four subgroups of system calls related to communications: pipelines, messages, semaphores, and shared memory. The following are the system calls that belong to each subgroup. &lt;br /&gt;
&lt;br /&gt;
Pipelines: &#039;&#039;pipe()&#039;&#039;. The pipe() call has the form int pipe(int file_descriptors[2]), where file_descriptors is an array with two entries: one for reading data and the other for writing data. Reads and writes proceed in sequential order and complete fully; that is, there are no partial writes, because the pipe writes the whole message at once.&lt;br /&gt;
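The two ends of a pipe can be seen with Python&#039;s os.pipe() wrapper: data written to the write end comes out of the read end in order, and closing the write end signals end-of-file to readers:&lt;br /&gt;

```python
import os

# pipe() returns a pair of descriptors: one for reading, one for writing.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"through the pipe")
os.close(write_fd)               # closing the write end signals EOF to readers

data = os.read(read_fd, 1024)    # bytes arrive in the order they were written
os.close(read_fd)
```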
&lt;br /&gt;
&lt;br /&gt;
Messages: &#039;&#039;msgget()&#039;&#039;, &#039;&#039;msgsnd()&#039;&#039;, &#039;&#039;msgrcv()&#039;&#039;, &#039;&#039;msgctl()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Semaphores: &#039;&#039;semget()&#039;&#039;, &#039;&#039;semop()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Shared Memory: &#039;&#039;shmget()&#039;&#039;, &#039;&#039;shmat()&#039;&#039;, &#039;&#039;shmdt()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System calls have long been an essential component of the Linux kernel (2.6.30+) and the UNIX operating systems. They are the gateway between user space and kernel services: they allow user-space programs to request services that they have no authority to perform directly. Over the years of Linux and UNIX development, the system calls themselves have not changed drastically. Rather than radically altering existing calls, developers have simply added more specific system calls to solve new problems as they arose in the OS. This approach has allowed the original set of roughly 35 system calls to grow to an astonishing several hundred. The hundreds of system calls available today can all be categorized into six major groups: file management, device management, information maintenance, process control, communications and miscellaneous calls. An operating system is a colossal program consisting of many intricate pieces that come together to form what we know today as the Linux kernel (2.6.30+) or UNIX. System calls are simply a small building block, but nevertheless an essential piece, of the tower that is our operating system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programmer&#039;s Manual, First Edition, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual, The Unix and Linux Forums. http://www.unix.com/man-page/FreeBSD/2/&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;br /&gt;
&lt;br /&gt;
Mendonça Rato, Luís Miguel, Professor, University of Évora. http://www.di.uevora.pt/~lmr/syscalls.html&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4101</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4101"/>
		<updated>2010-10-14T21:27:06Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions. The CPU enforces this separation, commonly referred to as protected mode, and system calls are the sanctioned exception to the rule. For example, older x86 processors used an interrupt mechanism to switch from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). In Linux and UNIX, the system call implementations themselves are small routines written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
UNIX and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not fit cleanly into the other categories, such as system calls dealing with errors. Today, UNIX and Linux operating systems contain hundreds of system calls, but nearly all of them descend from the roughly 35 system calls that shipped with the original UNIX in the early 1970s. In the next sections, we describe the various system calls in each of these categories, trace their evolution through history (major changes in functionality) and compare them with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group handle every operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and they are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, so a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments select everything from access modes, like O_RDONLY (read-only), to status flags, like O_APPEND (append mode). The only modifications made to these system calls were additions of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call lets a process close a file descriptor, releasing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. At that point, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; was added and solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;. These were all part of the earliest UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change came in SVR4, where a &#039;&#039;write&#039;&#039; call could be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used a 16-bit offset and was quickly replaced by &#039;&#039;lseek&#039;&#039;, which allows 32-bit offsets and is still used in modern Linux and UNIX systems; developers have since introduced &#039;&#039;lseek64&#039;&#039;, a variant that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned nanosecond resolution in the file&#039;s timestamp fields. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used in UNIX environments; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that the calls could deal with relative pathnames as arguments. They can easily be spotted as the system call names all finish with &#039;at&#039;. Here is a sample list of the created system calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039;, &#039;&#039;renameat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039;.&lt;br /&gt;
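Python exposes the &#039;at&#039; family through the dir_fd keyword of the corresponding os functions (supported on Linux; the file name below is illustrative). Passing a directory descriptor makes the kernel resolve the relative path against that directory, which is exactly what openat and unlinkat do:

```python
import os

# resolve a relative path against an open directory descriptor
dirfd = os.open("/tmp", os.O_RDONLY)
fd = os.open("demo_at.txt", os.O_RDWR | os.O_CREAT, 0o644, dir_fd=dirfd)  # openat(2)
os.write(fd, b"relative to /tmp")
os.close(fd)
made = os.path.exists("/tmp/demo_at.txt")   # True: created under /tmp
os.unlink("demo_at.txt", dir_fd=dirfd)      # unlinkat(2)
os.close(dirfd)
print(made)
```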
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in UNIX in the 70s. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call; most were the creation of new mount flags to enhance performance or control. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to control the device: you use a device as if it were a file, passing the appropriate flags.&lt;br /&gt;
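The everything-is-a-file point can be demonstrated by reading a character device node with the same open/read calls used for ordinary files (assuming a Unix-like system with /dev/urandom):

```python
import os

# a device node is opened and read exactly like a regular file
fd = os.open("/dev/urandom", os.O_RDONLY)   # character device, not a disk file
noise = os.read(fd, 8)                      # read(2) works unchanged
os.close(fd)
print(len(noise))                           # 8
```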
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the system call &#039;&#039;mmap&#039;&#039;, which maps or unmaps files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call, which is essentially the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units. This enables the mapping of large files.&lt;br /&gt;
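A minimal sketch of file-backed memory mapping, using Python&#039;s mmap module (which issues the kernel&#039;s mmap or mmap2 call underneath; the file path is illustrative):

```python
import mmap
import os

# map a file into memory and read it through the mapping
fd = os.open("/tmp/demo_mmap.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"mapped bytes")
mem = mmap.mmap(fd, 0)      # length 0 maps the whole file
chunk = mem[:6]             # reading memory reads the file contents
print(chunk)                # b'mapped'
mem.close()
os.close(fd)
os.unlink("/tmp/demo_mmap.bin")
```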
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that cannot be done using the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides a set of &#039;&#039;ioctl&#039;&#039; request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard interface for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer&#039;s system information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which interacted with times and dates: it set the system&#039;s idea of the time and date, expressed in seconds. &#039;stime&#039; is still available in Linux because it works, unlike the timezone half of &#039;settimeofday&#039;, which was meant to change the timezone (the tz_dsttime field) as well as the time; every use of that field in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
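The split between the classic whole-second time(2) interface and the finer-grained gettimeofday(2) interface is visible in Python&#039;s time module, a minimal sketch:

```python
import time

# two readings of the same system clock
seconds = time.time()   # float seconds since the epoch, sub-second precision
whole = int(seconds)    # the classic time(2) result: whole seconds only
print(whole)
```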
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes; some examples common to both UNIX and Linux are: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process and &#039;stty&#039; sets the mode of the typewriter. In Linux there are many more system calls in this third sub-type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process, and &#039;getppid&#039; gets process identification(add more) The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated. &#039;getppid&#039; returns the process ID of the calling process and never fails.&lt;br /&gt;
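Two of the attribute queries mentioned above, getppid(2) and stat(2), can be sketched via the os module:

```python
import os

# attribute queries: process identity and file status
ppid = os.getppid()    # getppid(2): parent process ID, documented to never fail
st = os.stat("/tmp")   # stat(2): status structure for a file or directory
print(ppid, st.st_mode)
```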
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4042</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4042"/>
		<updated>2010-10-14T20:06:14Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot access kernel memory and it cannot call kernel functions. The CPU enforcing this separation is commonly known as protected mode. System calls are the sanctioned exception to this rule. The mechanism has evolved: older x86 processors used an interrupt to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). On Unix-like systems, system calls are typically exposed to programs through small wrapper functions written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, such as system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group handle every operation required to run a file system in the operating system. Creating a file, deleting a file, opening a file and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th Berkeley Software Distribution (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
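A minimal sketch of chmod(2) and chdir(2) through Python&#039;s os module (the file path is purely illustrative):

```python
import os

# permission and working-directory calls, unchanged in spirit since 1971
with open("/tmp/demo_chmod.txt", "w") as f:
    f.write("x")
os.chmod("/tmp/demo_chmod.txt", 0o600)   # chmod(2): owner read/write only
perms = oct(os.stat("/tmp/demo_chmod.txt").st_mode)[-3:]
print(perms)                             # '600'
old = os.getcwd()
os.chdir("/tmp")                         # chdir(2): new working directory
here = os.getcwd()
os.chdir(old)                            # restore the original directory
os.unlink("/tmp/demo_chmod.txt")
print(here)
```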
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set access modes, like O_RDONLY (read-only), and status flags, like O_APPEND (append mode). The only modifications made to these system calls over time were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, freeing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file.&lt;br /&gt;
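The directory lifecycle discussed above (mkdir, the later rename and rmdir additions) can be sketched as follows; directory names are hypothetical:

```python
import os

# clean up any leftovers from a previous run
for d in ("/tmp/demo_dir", "/tmp/demo_dir2"):
    if os.path.isdir(d):
        os.rmdir(d)

os.mkdir("/tmp/demo_dir")                      # mkdir(2)
os.rename("/tmp/demo_dir", "/tmp/demo_dir2")   # rename(2), added in 4.2BSD
moved = os.path.isdir("/tmp/demo_dir2")        # True
os.rmdir("/tmp/demo_dir2")                     # rmdir(2), added in 4.2BSD
gone = os.path.isdir("/tmp/demo_dir2")         # False
print(moved, gone)
```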
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the earliest UNIX builds. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file identified by a file descriptor; the only notable change came in SVR4, where a write can be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call moves to a specified position in a file. It originally used a 16-bit offset, but it was quickly replaced by &#039;&#039;lseek&#039;&#039;, available as early as SVR4, which supports 32-bit offsets. &#039;&#039;lseek&#039;&#039; is still used in modern Linux and Unix systems, even as developers implement &#039;&#039;lseek64&#039;&#039;, a system call that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, Linux&#039;s &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used in UNIX environments; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take relative pathnames as arguments. They are easy to spot because their names all end in &#039;at&#039;. Here is a sample of the new calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in UNIX in the 70s. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call; most were the creation of new mount flags to enhance performance or control. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to control the device: you use a device as if it were a file, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the system call &#039;&#039;mmap&#039;&#039;, which maps or unmaps files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call, which is essentially the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units. This enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that cannot be done using the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides a set of &#039;&#039;ioctl&#039;&#039; request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard interface for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer&#039;s system information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which interacted with times and dates: it set the system&#039;s idea of the time and date, expressed in seconds. &#039;stime&#039; is still available in Linux because it works, unlike the timezone half of &#039;settimeofday&#039;, which was meant to change the timezone (the tz_dsttime field) as well as the time; every use of that field in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
The third sub-type is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes; some examples common to both UNIX and Linux are: &#039;stat&#039; gets file status, &#039;fork&#039; spawns a new process and &#039;stty&#039; sets the mode of the typewriter. In Linux there are many more system calls in this third sub-type; here are a few of them: &#039;capget&#039; gets the capabilities of a process, &#039;capset&#039; sets the capabilities of a process(add more) The &#039;capget&#039; and &#039;capset&#039; calls interact with the raw kernel interface for getting and setting thread capabilities. These two system calls are specific to Linux, and as such the use of these functions (in particular the format of the cap_user_*_t types) is updated as the kernel is updated.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4001</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=4001"/>
		<updated>2010-10-14T19:41:23Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot access kernel memory and it cannot call kernel functions. The CPU enforcing this separation is commonly known as protected mode. System calls are the sanctioned exception to this rule. The mechanism has evolved: older x86 processors used an interrupt to go from user space to kernel space, but newer processors (Pentium II and later) provide the sysenter and sysexit instructions, which optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). On Unix-like systems, system calls are typically exposed to programs through small wrapper functions written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, such as system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group handle every operation required to run a file system in the operating system. Creating a file, deleting a file, opening a file and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the first original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. In the 4th Berkeley Software Distribution (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Argument flags are used to set access modes, like O_RDONLY (read-only), and status flags, like O_APPEND (append mode). The only modifications made to these system calls over time were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows a process to close a file descriptor, freeing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the earliest UNIX builds. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file identified by a file descriptor; the only notable change came in SVR4, where a write can be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call moves to a specified position in a file. It originally used a 16-bit offset, but it was quickly replaced by &#039;&#039;lseek&#039;&#039;, available as early as SVR4, which supports 32-bit offsets. &#039;&#039;lseek&#039;&#039; is still used in modern Linux and Unix systems, even as developers implement &#039;&#039;lseek64&#039;&#039;, a system call that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, Linux&#039;s &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file&#039;s timestamps. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used in UNIX environments; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were created so that calls could take relative pathnames as arguments. They are easy to spot because their names all end in &#039;at&#039;. Here is a sample of the new calls: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in UNIX in the 70s. The two calls allow the operating system to load file systems from storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call; most were the creation of new mount flags to enhance performance or control. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The &#039;&#039;umount&#039;&#039; system call unmounts a file system from its storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to control the device: you use a device as if it were a file, passing the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the system call &#039;&#039;mmap&#039;&#039;, which maps or unmaps files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments, but since Linux 2.4, Linux has supplemented it with the &#039;&#039;mmap2&#039;&#039; system call, which is essentially the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units. This enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Version 7 of Unix, the &#039;&#039;ioctl&#039;&#039; system call was introduced for device-specific operations that cannot be done using the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides a set of &#039;&#039;ioctl&#039;&#039; request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard interface for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computer&#039;s system information to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX in regard to system calls, one must explore the three sub-types of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first sub-type is get/set time and/or date. In Linux, this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which interacted with times and dates: it set the system&#039;s idea of the time and date, expressed in seconds. &#039;stime&#039; is still available in Linux because it works, unlike the timezone half of &#039;settimeofday&#039;, which was meant to change the timezone (the tz_dsttime field) as well as the time; every use of that field in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
The third subtype is get/set process, file, or device attributes. In UNIX there are several system calls for process, file and device attributes; some examples are &#039;stat&#039;, which gets a file&#039;s status, &#039;fork&#039;, which spawns a new process, and &#039;stty&#039;, which sets the mode of the typewriter (terminal).&lt;br /&gt;
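As a minimal sketch (not part of the original essay), the attribute-getting calls above can be exercised from Python, whose os module wraps the corresponding system calls:

```python
# Sketch: reading process and file attributes via os, which wraps the
# getpid and stat system calls directly.
import os

pid = os.getpid()       # process attribute: this process's ID
info = os.stat(".")     # file attribute: status of the current directory
uid = info.st_uid       # owner of that directory, from the stat result
```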
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3994</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3994"/>
		<updated>2010-10-14T19:32:20Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Not in this group and I&#039;m not completely sure if this is relevant but I found that UNIX used the POSIX standard while Linux used LSB which is based on the POSIX standard. &lt;br /&gt;
This article outlines some conflicts between them [https://www.opengroup.org/platform/single_unix_specification/uploads/40/13450/POSIX_and_Linux_Application_Compatibility_Final_-_v1.0.pdf]. I didn&#039;t find the actual comparisons very comprehensible but the ideas are there. --[[User:Slay|Slay]] 15:05, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uh, where did Figure 1 and much of the current text come from?  It looks like it was cut and pasted from a random source.  Please don&#039;t plagiarize!  --[[User:Soma|Anil]] (19:24, 8 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Look into the reference article &amp;quot;Kernel command using Linux system calls&amp;quot;. Plagiarism is not my goal. I&#039;m using my own words to make a simple but complete description of a system call using the interrupt method. Check the references, and if you think it is too close, please let me know. It is hard when an author makes such a good and clear description.--[[User:Sblais2|Sblais2]] 21:02, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I thought it would be nice to first describe what is a system calls and the two current methods of doing them. The first is the interrupt method. The second which is used in Linux 2.6.18+ is using the sysenter and sysexit instructions.--[[User:Sblais2|Sblais2]] 19:56, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
You can&#039;t use that figure.  And you can&#039;t copy the text either, even if you change the words slightly.  But really, you&#039;re just wasting your time.  This question is not talking about how system calls are invoked; if you wanted to discuss this, you should be discussing system call invocation mechanisms on the PDP-11 and VAX systems!  Here I&#039;m interested in what are the calls, i.e., kernel functions that can be invoked by a regular program.--[[User:Soma|Anil]]&lt;br /&gt;
&lt;br /&gt;
This link provides about 40 UNIX system calls along with example on where they would be used from the looks of it: [http://www.di.uevora.pt/~lmr/syscalls.html]. --[[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Thank you for clarifying things. I will go that route. --[[User:Sblais2|Sblais2]] 13:08, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I don&#039;t see everyone contributing to this group. Please do let the others in your group know, divide your work into sections and discuss here. If you have questions - ask.--[[User:praman|Preeti]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This link shows all the system calls from Linux 2.6.33 [http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html]&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 23:36, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
As no one in our group made suggestions on the format of our essay, I&#039;ve put one in place. In your research, each system call should fit in one of the categories. If someone picks one up, please let me know ASAP. I will be working on that all day. Read the intro&#039;s last paragraph if you&#039;re not sure what you should write on. My English writing skills are not perfect, so if one of you guys sees ways to improve the text, please do. --[[User:Sblais2|Sblais2]] 14:29, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well I feel like I&#039;m the only one in that team but...Anyway I&#039;ve completed the first 2 sections. Please try working on the next 4.  If you want to modify something, please post a small gist of it in here so we can all validate. Thanks. --[[User:Sblais2|Sblais2]] 20:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am wondering if we have actually split up the work accordingly. I am going to attempt to answer Information Maintenance; if anyone has dibs, please let me know. - Csulliva&lt;br /&gt;
I found this site very helpful for understanding system calls for Linux and UNIX: &lt;br /&gt;
http://ss64.com/bash/&lt;br /&gt;
-Csulliva&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do process control calls and help out on the last part that is not written up yet. I&#039;ll read the other parts as well just to get an understanding. [[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Csulliva, be careful not to confuse system calls and shell commands. Some of them actually have the same name, like &#039;&#039;mkdir&#039;&#039;, but shell commands are at the user level; some of them will make system calls to complete the operation.&lt;br /&gt;
It&#039;s good to finally hear from you guys.--[[User:Sblais2|Sblais2]] 01:39, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the communication calls and the miscellaneous system calls. Not sure if you guys wanted to add more to this, but I can help out with writing a conclusion as well. I&#039;m somewhat good at writing, so if i see any little things that I could touch up on, i&#039;ll help out with that.-R.arteaga&lt;br /&gt;
&lt;br /&gt;
After my confusion between bash commands and system calls, I had some trouble finding system calls for Linux that affect time. Here is a website I found that helped me with descriptions of the calls: http://www.digilife.be/quickreferences/QRC/LINUX%20System%20Call%20Quick%20Reference.pdf&lt;br /&gt;
Hopefully I am on the right track now... someone stop me if I am not. -Csulliva&lt;br /&gt;
&lt;br /&gt;
Looks ok to me Csulliva; you might want to check the link I posted in this discussion. It shows all the system calls in the Linux kernel 2.6.30, and they even show history information, so it is easy to track the early Unix implementations. R.arteaga -&gt; That would be great. I started thinking about a conclusion, but writing is not my forte (unless I am underestimating myself). --[[User:Sblais2|Sblais2]] 11:59, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Also, don&#039;t forget to add your references. I added mine in the reference section. --[[User:Sblais2|Sblais2]] 19:02, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
My writing skills have never been any good. I&#039;ve read through a whole bunch of the page and fixed a few typos here and there. I want to add to the &amp;quot;Information Maintenance Calls&amp;quot; section, but I can&#039;t promise that it will be any good. So feel free to help me out. -Dlangloi&lt;br /&gt;
&lt;br /&gt;
Okay, I am currently working on the information maintenance calls part, and I am a little stuck on the system data subtype, so by all means help out. That being said, can anyone give me a hint on a system call that works with system data in UNIX? I read the manual and I am still drawing a blank. Thanks -Csulliva&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3992</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3992"/>
		<updated>2010-10-14T19:29:34Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot access kernel memory and it cannot call kernel functions. The CPU enforces this separation, commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide instructions, sysenter and sysexit, that optimize this transition (Hayward, Mike. Intel P6 vs P7 system call performance. [http://lkml.org/lkml/2002/12/9/13], December 9, 2002). On Unix and Linux, the system call interface is implemented inside the kernel, which is written largely in C.&lt;br /&gt;
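To make the user-space side of this concrete, here is a small Python sketch (not from the original essay) showing the same kernel service reached through two different wrappers; the C library issues the actual trap instruction on our behalf:

```python
# Sketch: invoking a kernel service (getpid) two ways on a Unix-like
# system -- via Python's os wrapper and via the C library wrapper.
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)  # handle to the already-loaded libc

pid_via_wrapper = os.getpid()   # Python's wrapper around the getpid syscall
pid_via_libc = libc.getpid()    # libc's wrapper around the same syscall

assert pid_via_wrapper == pid_via_libc
```

Both paths end in the same kernel entry point; the wrappers only differ in how the arguments and return value are marshalled.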
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, such as system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX in the early 70s. In the next sections we describe the system calls in each of these categories, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group cover every type of operation required to run a file system: creating, deleting, opening and closing files are just a few examples, and most of them have hardly changed over the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the original UNIX (1971) and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and provide the basis of file-system security. The &#039;&#039;chdir&#039;&#039; call allows a process to change its current working directory. In the 4th Berkeley distribution of UNIX (4BSD), new system calls were added to give applications more control over the file system. The &#039;&#039;chroot&#039;&#039; call allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links.&lt;br /&gt;
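A minimal sketch of chmod and chdir in action (the scratch file and directory names here are made up for the example; Python&#039;s os functions wrap the system calls one-to-one):

```python
# Sketch: chmod changes permission bits; chdir changes the working dir.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()        # scratch file for the example
os.close(fd)
os.chmod(path, 0o640)                # chmod: owner rw, group r, other none
mode = stat.S_IMODE(os.stat(path).st_mode)

old_cwd = os.getcwd()
os.chdir(tempfile.gettempdir())      # chdir: move the working directory
os.chdir(old_cwd)                    # ...and restore it
os.unlink(path)                      # clean up the scratch file
```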
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Argument flags set either access modes, such as O_RDONLY (read-only), or status flags, such as O_APPEND (append mode). The only modifications made to these calls over the years were additions of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call releases a file descriptor so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD added &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file.&lt;br /&gt;
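A short sketch of these creation calls (scratch names invented for the example; each os function below maps onto the system call of the same name):

```python
# Sketch: open with O_CREAT, close, mkdir, rename and rmdir.
import os
import tempfile

d = tempfile.mkdtemp()                         # scratch directory
f = os.path.join(d, "a.txt")

fd = os.open(f, os.O_WRONLY | os.O_CREAT, 0o644)  # open + create
os.close(fd)                                      # close: release descriptor

os.mkdir(os.path.join(d, "sub"))                  # mkdir
os.rename(os.path.join(d, "sub"),                 # rename (added in 4.2BSD)
          os.path.join(d, "sub2"))
os.rmdir(os.path.join(d, "sub2"))                 # rmdir (added in 4.2BSD)

exists = os.path.exists(f)
os.unlink(f)                                      # clean up
os.rmdir(d)
```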
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: ‘‘read’’, ‘‘write’’, ‘‘seek’’ and ‘‘stat’’, all part of the earliest UNIX builds. The ‘‘read’’ and ‘‘write’’ system calls read from and write to a file (identified by a file descriptor). The only notable change came in SVR4, where a write can be interrupted at any time. The ‘‘seek’’ system call moves to a specified position in a file; it used 16-bit offsets and was quickly replaced by ‘‘lseek’’, which uses 32-bit offsets. ‘‘lseek’’ is still used in modern Linux and Unix systems, with 64-bit variants such as ‘‘lseek64’’ available for large files. The ‘‘stat’’ system call lets a process get the status of a file. With SVR4, two other versions of that call existed: ‘‘fstat’’ and ‘‘lstat’’. They do the same thing, except that ‘‘lstat’’ gives the status of a symbolic link itself and ‘‘fstat’’ gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat returns nanosecond fields in the file’s timestamps. With the release of 4.4BSD, two new system calls, ‘‘statvfs’’ and ‘‘fstatvfs’’, were introduced to provide information about a mounted file system; they do the same thing except that fstatvfs takes a file descriptor as its argument. These calls are used in UNIX environments; Linux provides ‘‘statfs’’ and ‘‘fstatfs’’ for the same purpose.&lt;br /&gt;
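The read/write/lseek/fstat family can be sketched in a few lines (scratch file invented for the example):

```python
# Sketch: write bytes, lseek to an offset, read from there, fstat the size.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")          # write: 11 bytes at offset 0
os.lseek(fd, 6, os.SEEK_SET)          # lseek: jump to offset 6
tail = os.read(fd, 5)                 # read: the 5 bytes from there
size = os.fstat(fd).st_size           # fstat: status via the descriptor
os.close(fd)
os.unlink(path)
```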
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. &lt;br /&gt;
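A sketch contrasting hard links and the 4.2BSD symbolic links (all names invented for the example):

```python
# Sketch: link vs symlink, and unlink removing only the link's name.
import os
import stat
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "file")
open(target, "w").close()                      # the file being linked to

os.link(target, os.path.join(d, "hard"))       # link: second hard name
os.symlink(target, os.path.join(d, "soft"))    # symlink (4.2BSD)

# lstat reports on the link itself; stat would follow it to the file.
is_link = stat.S_ISLNK(os.lstat(os.path.join(d, "soft")).st_mode)

os.unlink(os.path.join(d, "soft"))             # removes only the symlink
still_there = os.path.exists(target)           # the target file survives
```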
&lt;br /&gt;
&lt;br /&gt;
In the Linux 2.6.16 build, multiple system calls were added that take a directory file descriptor, so that relative pathname arguments can be interpreted relative to that directory. They are easily spotted because their names all end in &#039;at&#039;: &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039;, &#039;&#039;fchownat&#039;&#039;, &#039;&#039;fstatat&#039;&#039;, &#039;&#039;linkat&#039;&#039;, &#039;&#039;unlinkat&#039;&#039; and &#039;&#039;renameat&#039;&#039;.&lt;br /&gt;
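On platforms that support the &#039;at&#039; calls, Python exposes them through the dir_fd keyword; a sketch (scratch names invented, and dir_fd support assumed, as on Linux):

```python
# Sketch: openat/mkdirat/unlinkat via os functions with dir_fd.
import os
import tempfile

d = tempfile.mkdtemp()
dfd = os.open(d, os.O_RDONLY)      # descriptor naming the directory

# "rel.txt" is resolved relative to dfd, i.e. an openat call underneath.
fd = os.open("rel.txt", os.O_WRONLY | os.O_CREAT, 0o644, dir_fd=dfd)
os.close(fd)
os.mkdir("relsub", dir_fd=dfd)     # mkdirat

created = os.path.exists(os.path.join(d, "rel.txt"))
os.unlink("rel.txt", dir_fd=dfd)   # unlinkat
os.rmdir(os.path.join(d, "relsub"))
os.close(dfd)
```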
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware and they are mainly used to requests for devices, release devices, to logically attach a device or to detach it, get and modified device attributes and read and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for UNIX and Linux are ‘‘mount’’ and ‘‘umount’’, which were among the few system calls available in UNIX in the 70s. These two calls let the operating system attach and detach file systems on storage devices. Most changes to ‘‘mount’’ were new mount flags to enhance performance; for example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, is per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The ‘‘umount’’ system call unmounts a file system from its storage device. The only noteworthy change to ‘‘umount’’ was the addition of ‘‘umount2’’ in Linux 2.1.116, which is the same as ‘‘umount’’ except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ‘‘open’’, ‘‘read’’ and ‘‘write’’ calls can also be used to access devices. As discussed in the previous section, argument flags give finer control over the device: a device is used as if it were a file, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the ‘‘mmap’’ system call, which maps (and, with ‘‘munmap’’, unmaps) files or devices into memory. Once a file or device is mapped, the call returns a pointer to the mapped area, allowing processes to access it directly. The call is still used in Unix environments; since Linux 2.4, the kernel also provides the mmap2 system call, which is essentially the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, enabling large files to be mapped.&lt;br /&gt;
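A sketch of memory mapping from Python, whose mmap module wraps the mmap system call (scratch file invented for the example):

```python
# Sketch: map a file into memory and read from the mapped pages.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"mapped bytes")   # give the file some contents

m = mmap.mmap(fd, 0)            # length 0: map the whole file
first = m[:6]                   # reads go straight to the mapped pages
m.close()
os.close(fd)
os.unlink(path)
```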
&lt;br /&gt;
&lt;br /&gt;
Version 7 Unix introduced the ‘‘ioctl’’ system call for device-specific operations that cannot be done through the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides a set of ioctl request codes that allow various operations on its device. The request codes are hardware dependent, so there is no standard for this system call.&lt;br /&gt;
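One common, Linux-specific request code can serve as a sketch: FIONREAD asks the driver how many bytes are queued on a descriptor (here a pipe stands in for a device):

```python
# Sketch: the FIONREAD ioctl, asking how many bytes are waiting to read.
import fcntl
import os
import struct
import termios

r, w = os.pipe()
os.write(w, b"hello")   # queue 5 bytes on the pipe

raw = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
pending = struct.unpack("i", raw)[0]   # driver reports the queued count

os.close(r)
os.close(w)
```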
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that report the system&#039;s own information back to the user or modify it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, &#039;time&#039; returns the time in seconds, and there are a few others such as &#039;ftime&#039;. In the earliest versions of UNIX, the system call used was &#039;stime&#039;, which set the system&#039;s idea of the time and date, expressed in seconds. &#039;stime&#039; is still present in Linux because it works well, unlike the timezone field (tz_dsttime) of &#039;settimeofday&#039;, which was meant to set the timezone as well as the time; every occurrence of this field in the kernel source (apart from its declaration) is considered a bug. &lt;br /&gt;
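As a sketch, the get side of these calls is reachable from Python; time.time() ultimately uses gettimeofday/clock_gettime, and CLOCK_REALTIME is the same wall clock that settimeofday would adjust (setting it requires privileges, so only reads are shown):

```python
# Sketch: reading the wall clock through two time-of-day interfaces.
import time

wall = time.time()                                   # seconds since the epoch
realtime = time.clock_gettime(time.CLOCK_REALTIME)   # same clock, explicitly

# The two readings come from the same underlying clock, so any gap is
# just the instants of the two calls.
drift = max(wall, realtime) - min(wall, realtime)
```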
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
&lt;br /&gt;
Salus, Peter H. A Quarter Century of Unix. Addison-Wesley Professional, June 10, 1994.&lt;br /&gt;
&lt;br /&gt;
Unix Programming Manual, http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html, November 3, 1971.&lt;br /&gt;
&lt;br /&gt;
BSD System Calls Manual. http://www.unix.com/man-page/FreeBSD/2/,  The Unix and Linux Forums.&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3807</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3807"/>
		<updated>2010-10-14T15:22:26Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot access kernel memory and it cannot call kernel functions. The CPU enforces this separation, commonly known as protected mode. System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). On Unix and Linux, the system call interface is implemented inside the kernel, which is written largely in C.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that do not really fit in the other categories, such as system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX in the early 70s. In the next sections we describe the system calls in each of these categories, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group cover every type of operation required to run a file system: creating, deleting, opening and closing files are just a few examples, and most of them have hardly changed over the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and provide the basis of file-system security. The &#039;&#039;chdir&#039;&#039; call allows a process to change its current working directory. In the 4th Berkeley distribution of UNIX (4BSD), new system calls were added to give applications more control over the file system. The &#039;&#039;chroot&#039;&#039; call allows a process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added; they operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take extra arguments to deal with relative pathnames. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Argument flags set either access modes, such as O_RDONLY (read-only), or status flags, such as O_APPEND (append mode). The only modifications made to these calls over the years were additions of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call releases a file descriptor so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD added &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: ‘‘read’’, ‘‘write’’, ‘‘seek’’ and ‘‘stat’’, all part of the earliest UNIX builds. The ‘‘read’’ and ‘‘write’’ system calls read from and write to a file (identified by a file descriptor). The only notable change came in SVR4, where a write can be interrupted at any time. The ‘‘seek’’ system call moves to a specified position in a file; it used 16-bit offsets and was quickly replaced by ‘‘lseek’’, which uses 32-bit offsets. ‘‘lseek’’ is still used in modern Linux and Unix systems, with 64-bit variants such as ‘‘lseek64’’ available for large files. The ‘‘stat’’ system call lets a process get the status of a file. With SVR4, two other versions of that call existed: ‘‘fstat’’ and ‘‘lstat’’. They do the same thing, except that ‘‘lstat’’ gives the status of a symbolic link itself and ‘‘fstat’’ gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat returns nanosecond fields in the file’s timestamps. The ‘‘fstatat’’ system call was added in Linux kernel 2.6.16, again for the same reason as ‘‘openat’’. With the release of 4.4BSD, two new system calls, ‘‘statvfs’’ and ‘‘fstatvfs’’, were introduced to provide information about a mounted file system; they do the same thing except that fstatvfs takes a file descriptor as its argument. These calls are used in UNIX environments; Linux provides ‘‘statfs’’ and ‘‘fstatfs’’ for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added, for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware and they are mainly used to requests for devices, release devices, to logically attach a device or to detach it, get and modified device attributes and read and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for UNIX and Linux are ‘‘mount’’ and ‘‘umount’’, which were among the few system calls available in UNIX in the 70s. These two calls let the operating system attach and detach file systems on storage devices. Most changes to ‘‘mount’’ were new mount flags to enhance performance; for example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, is per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that was cloned. The ‘‘umount’’ system call unmounts a file system from its storage device. The only noteworthy change to ‘‘umount’’ was the addition of ‘‘umount2’’ in Linux 2.1.116, which is the same as ‘‘umount’’ except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ‘‘open’’, ‘‘read’’ and ‘‘write’’ calls can also be used to access devices. As discussed in the previous section, argument flags give finer control over the device: a device is used as if it were a file, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the ‘‘mmap’’ system call, which maps (and, with ‘‘munmap’’, unmaps) files or devices into memory. Once a file or device is mapped, the call returns a pointer to the mapped area, allowing processes to access it directly. The call is still used in Unix environments; since Linux 2.4, the kernel also provides the mmap2 system call, which is essentially the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, enabling large files to be mapped.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Version 7 of Unix introduced the ‘‘ioctl’’ system call for device-specific operations that cannot be performed through the standard system calls; this makes it possible to deal with a multitude of devices. Each device driver provides a set of ioctl request codes that allow various operations on its device. Because the request codes are hardware-dependent, there is no standard for this system call.&lt;br /&gt;
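A small sketch of one such request code, using Python's fcntl.ioctl wrapper: FIONREAD (a Linux request, not part of any cross-device standard) asks a driver how many bytes are waiting to be read:&lt;br /&gt;

```python
import fcntl
import os
import struct
import termios

# Create a pipe and put a few bytes into it.
r, w = os.pipe()
os.write(w, b"hello")

# FIONREAD fills an int with the number of bytes pending on the
# descriptor. Request codes like this are driver-specific.
buf = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
pending = struct.unpack("i", buf)[0]

os.close(r)
os.close(w)
print(pending)
```

A different device (a terminal, a tape drive, a network interface) would accept an entirely different set of request codes, which is exactly why ioctl resists standardization.&lt;br /&gt;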
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or modify it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; gets the time, &#039;settimeofday&#039; sets it, and &#039;time&#039; returns the time in seconds. The earliest versions of UNIX used the system call &#039;stime&#039;, which set the system&#039;s idea of the time and date in seconds. &#039;stime&#039; is still available in Linux. By contrast, &#039;settimeofday&#039; was also meant to set the timezone (tz_dsttime), but every occurrence of this field in the kernel source, apart from its declaration, is considered a bug.&lt;br /&gt;
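The "get" side of this subtype can be sketched with Python's time module, whose functions wrap these kernel interfaces (setting the clock, by contrast, requires root privileges, so it is not shown):&lt;br /&gt;

```python
import time

# time() corresponds to the classic time(2) call: seconds since the
# Unix epoch, as a float.
seconds = time.time()

# clock_gettime_ns(CLOCK_REALTIME) exposes the higher-resolution
# interface (nanoseconds), available in Python 3.7 and later.
nanos = time.clock_gettime_ns(time.CLOCK_REALTIME)

print(seconds, nanos)
```

Both read the same realtime clock; the second form makes the nanosecond resolution of the modern interface explicit.&lt;br /&gt;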
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3722</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3722"/>
		<updated>2010-10-14T13:35:59Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU enforces this separation, commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). The system call implementations inside the kernel are typically written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t fit in the other categories, like system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. The 4th Berkeley Software Distribution of UNIX (4BSD) added new system calls that give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to change its current root directory to one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added; they operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take more arguments to deal with relative pathnames.&lt;br /&gt;
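A minimal sketch of chmod and its fchmodat-style variant, using Python's os wrappers on a throwaway temporary file (os.chmod with dir_fd maps to fchmodat on platforms that support it):&lt;br /&gt;

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# chmod: set owner read/write only, then read the mode back via stat.
os.chmod(path, 0o600)
mode = stat.S_IMODE(os.stat(path).st_mode)

# fchmodat-style call: resolve the (relative) name against an open
# directory descriptor instead of the current working directory.
dir_fd = os.open(os.path.dirname(path), os.O_RDONLY)
os.chmod(os.path.basename(path), 0o644, dir_fd=dir_fd)
mode_at = stat.S_IMODE(os.stat(path).st_mode)
os.close(dir_fd)

os.unlink(path)
print(oct(mode), oct(mode_at))
```

The dir_fd form is exactly the "more arguments to deal with relative pathnames" that the *at family adds: the path is interpreted relative to dir_fd rather than to the process's working directory.&lt;br /&gt;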
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Flag arguments set access modes, like O_RDONLY (read-only), and status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, freeing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. At that point, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD added &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
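The flag combinations can be sketched with Python's os.open wrapper: O_CREAT together with O_WRONLY mirrors the old creat(), and O_APPEND forces every write to land at the end of the file:&lt;br /&gt;

```python
import os
import tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

# O_CREAT|O_WRONLY mirrors creat(); O_APPEND is a status flag that
# makes each write() append regardless of the current file offset.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"first ")
os.write(fd, b"second")
os.close(fd)

with open(path, "rb") as f:
    contents = f.read()
os.unlink(path)
print(contents)
```

The access mode and the status flags are OR-ed into one argument, which is why new behaviours could be added over the years without changing the call's signature.&lt;br /&gt;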
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: ‘‘read’’, ‘‘write’’, ‘‘seek’’ and ‘‘stat’’. These were all part of the earliest UNIX builds. The ‘‘read’’ and ‘‘write’’ system calls allow reading from and writing to a file (identified by a file descriptor). The only change came in SVR4, where a write can be interrupted at any time. The ‘‘seek’’ system call is used to move to a specified position in a file. It used a 16-bit offset, but it was replaced very quickly by ‘‘lseek’’, available as early as SVR4, which uses 32-bit offsets. ‘‘lseek’’ is still used in modern Linux and Unix systems, even as developers work on ‘‘lseek64’’, a system call that uses 64-bit offsets. The ‘‘stat’’ system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: ‘‘fstat’’ and ‘‘lstat’’. They do the same thing, except that ‘‘lstat’’ gives the status of symbolic links and ‘‘fstat’’ gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat has returned a nanoseconds field in the file’s timestamps. The ‘‘fstatat’’ system call was added in Linux kernel 2.6.16, again for the same reason as ‘‘openat’’. With the release of 4.4BSD, two new system calls, ‘‘statvfs’’ and ‘‘fstatvfs’’, were introduced to provide information about a mounted file system; they do the same thing except that fstatvfs takes a file descriptor as an argument. These calls are used only in Unix environments; Linux provides ‘‘statfs’’ and ‘‘fstatfs’’ for the same purpose.&lt;br /&gt;
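A short sketch of lseek, read and fstat working together, via Python's os wrappers; note the nanosecond timestamp field (st_mtime_ns) mentioned above:&lt;br /&gt;

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")

# lseek moves the file offset; here to byte 4, then read 3 bytes there.
os.lseek(fd, 4, os.SEEK_SET)
tail = os.read(fd, 3)

# fstat reports file status from a descriptor; st_size is the length,
# and st_mtime_ns exposes the nanosecond-resolution timestamp.
st = os.fstat(fd)

os.close(fd)
os.unlink(path)
print(tail, st.st_size, st.st_mtime_ns)
```

The descriptor-based fstat and the path-based stat return the same structure; only the way the file is named differs.&lt;br /&gt;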
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added, for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
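The hard-link/symlink distinction above can be sketched with Python's os wrappers around these calls, in a throwaway directory:&lt;br /&gt;

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "file")
with open(target, "w") as f:
    f.write("data")

# link creates a second directory entry for the same inode,
# so the link count visible through stat rises to 2.
hard = os.path.join(d, "hardlink")
os.link(target, hard)
nlink = os.stat(target).st_nlink

# symlink creates a new file that merely names the target.
soft = os.path.join(d, "symlink")
os.symlink(target, soft)

# unlink on a symbolic link removes only the link, not the target.
os.unlink(soft)
target_survives = os.path.exists(target)

os.unlink(hard)
os.unlink(target)
os.rmdir(d)
print(nlink, target_survives)
```

Unlinking the hard link, by contrast, would just drop the count back to 1; the file's data is freed only when the last name and the last open descriptor are gone.&lt;br /&gt;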
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware: they are used mainly to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are ‘‘mount’’ and ‘‘umount’’. These were among the few system calls available in UNIX in the 70s, and they allow the operating system to load file systems from storage devices. The few changes made to the mount system call were mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, was per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that cloned it. The ‘‘umount’’ system call unmounts the file system from the storage device. The only noteworthy change to ‘‘umount’’ was the creation of ‘‘umount2’’ in Linux 2.1.116, which is the same as ‘‘umount’’ except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ‘‘open’’, ‘‘read’’ and ‘‘write’’ calls can also be used to access devices. As discussed in the previous section, flag arguments give finer control over the device: a device is used as if it were a file, opened with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System V Release 4 (SVR4) introduced the ‘‘mmap’’ system call, which maps or unmaps files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access the device through ordinary memory accesses. This system call is still used in Unix environments, but since Linux 2.4 the kernel has also provided the mmap2 system call. It is essentially the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of larger files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Version 7 of Unix introduced the ‘‘ioctl’’ system call for device-specific operations that cannot be performed through the standard system calls; this makes it possible to deal with a multitude of devices. Each device driver provides a set of ioctl request codes that allow various operations on its device. Because the request codes are hardware-dependent, there is no standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or modify it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time and/or date. In Linux this can be done by a few different system calls: &#039;gettimeofday&#039; to get the time, &#039;settimeofday&#039; to set it, and &#039;ftime&#039; to return the time and date. In the earliest versions of UNIX, the system call used to interact with the date and time was &#039;stime&#039;, which could only set the time in seconds. &#039;stime&#039; is still supported by Linux because it works, unlike &#039;settimeofday&#039;, which was created to change timezones (tz_dsttime) as well as the time, even though every occurrence of this field in the kernel source (apart from its declaration) is considered a bug.&lt;br /&gt;
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3709</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3709"/>
		<updated>2010-10-14T13:17:00Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU enforces this separation, commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). The system call implementations inside the kernel are typically written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t fit in the other categories, like system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. The 4th Berkeley Software Distribution of UNIX (4BSD) added new system calls that give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to change its current root directory to one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added; they operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take more arguments to deal with relative pathnames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open and possibly create a file or device. Flag arguments set access modes, like O_RDONLY (read-only), and status flags, like O_APPEND (append mode). The only modifications made to these system calls were the addition of status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, freeing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a directory. At that point, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD added &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: ‘‘read’’, ‘‘write’’, ‘‘seek’’ and ‘‘stat’’. These were all part of the earliest UNIX builds. The ‘‘read’’ and ‘‘write’’ system calls allow reading from and writing to a file (identified by a file descriptor). The only change came in SVR4, where a write can be interrupted at any time. The ‘‘seek’’ system call is used to move to a specified position in a file. It used a 16-bit offset, but it was replaced very quickly by ‘‘lseek’’, available as early as SVR4, which uses 32-bit offsets. ‘‘lseek’’ is still used in modern Linux and Unix systems, even as developers work on ‘‘lseek64’’, a system call that uses 64-bit offsets. The ‘‘stat’’ system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: ‘‘fstat’’ and ‘‘lstat’’. They do the same thing, except that ‘‘lstat’’ gives the status of symbolic links and ‘‘fstat’’ gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, stat has returned a nanoseconds field in the file’s timestamps. The ‘‘fstatat’’ system call was added in Linux kernel 2.6.16, again for the same reason as ‘‘openat’’. With the release of 4.4BSD, two new system calls, ‘‘statvfs’’ and ‘‘fstatvfs’’, were introduced to provide information about a mounted file system; they do the same thing except that fstatvfs takes a file descriptor as an argument. These calls are used only in Unix environments; Linux provides ‘‘statfs’’ and ‘‘fstatfs’’ for the same purpose.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added, for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware: they are used mainly to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are ‘‘mount’’ and ‘‘umount’’. These were among the few system calls available in UNIX in the 70s, and they allow the operating system to load file systems from storage devices. The few changes made to the mount system call were mostly the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, was per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that cloned it. The ‘‘umount’’ system call unmounts the file system from the storage device. The only noteworthy change to ‘‘umount’’ was the creation of ‘‘umount2’’ in Linux 2.1.116, which is the same as ‘‘umount’’ except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ‘‘open’’, ‘‘read’’ and ‘‘write’’ calls can also be used to access devices. As discussed in the previous section, flag arguments give finer control over the device: a device is used as if it were a file, opened with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System V Release 4 (SVR4) introduced the ‘‘mmap’’ system call, which maps or unmaps files or devices into memory. Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access the device through ordinary memory accesses. This system call is still used in Unix environments, but since Linux 2.4 the kernel has also provided the mmap2 system call. It is essentially the same as mmap except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of larger files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Version 7 of Unix introduced the ‘‘ioctl’’ system call for device-specific operations that cannot be performed through the standard system calls; this makes it possible to deal with a multitude of devices. Each device driver provides a set of ioctl request codes that allow various operations on its device. Because the request codes are hardware-dependent, there is no standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or modify it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must explore the three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time or date. In Linux this is done by the system call &#039;gettimeofday&#039;, which gets the date and time, and &#039;settimeofday&#039;, which sets it. In the earliest versions of UNIX, the system call used to interact with the date and time was &#039;stime&#039;, which only set it in seconds. &#039;stime&#039; is still used by Linux, but because it offered no easy way to change anything else, like timezones and dates, &#039;stime&#039; evolved into &#039;settimeofday&#039;, and a call to get the time and date was added because none existed before.&lt;br /&gt;
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3681</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3681"/>
		<updated>2010-10-14T07:06:19Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to be able to access the kernel directly: it can&#039;t access kernel memory and it can&#039;t call kernel functions. The CPU enforces this separation, commonly known as protected mode. System calls are the sanctioned exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user space to kernel space, but newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). The system call implementations inside the kernel are typically written in the C programming language.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls are roughly grouped into six major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are the ones that don’t fit in the other categories, like system calls dealing with errors. Today, Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX OS in the early 70s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation required to run a file system. Creating, deleting, opening and closing files are just a few examples, and most of these calls have hardly changed throughout the years.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security in the file system. The system call &#039;&#039;chdir&#039;&#039; allows a process to change its current working directory. The 4th Berkeley Software Distribution of UNIX (4BSD) added new system calls that give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows a process to change its current root directory to one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added; they operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take more arguments to deal with relative pathnames.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow a process to open, and possibly create, a file or device. Flag arguments set either access modes, such as O_RDONLY (read-only), or status flags, such as O_APPEND (append mode); the only modifications made to these calls over the years were additional status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call closes a file descriptor so that the descriptor can be reused; it has not changed. &#039;&#039;mkdir&#039;&#039; creates a directory. Originally, deleting a directory required a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD added &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;. &lt;br /&gt;
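As an illustrative sketch, Python&#039;s &#039;&#039;os&#039;&#039; module exposes these calls directly; passing the dir_fd keyword issues the &#039;&#039;openat&#039;&#039; variant (assuming Linux 2.6.16 or later):&lt;br /&gt;

```python
# Sketch: open/creat/close plus the *at variants. Passing dir_fd makes
# Python issue openat(2), resolving the path relative to that directory
# descriptor (Linux 2.6.16+).
import os
import tempfile

tmpdir = tempfile.mkdtemp()
os.mkdir(os.path.join(tmpdir, "sub"))        # mkdir(2)

flags = os.O_WRONLY | os.O_CREAT | os.O_APPEND
fd = os.open(os.path.join(tmpdir, "log"), flags, 0o644)
os.write(fd, b"first line\n")                # write(2) in append mode
os.close(fd)                                 # close(2): the fd number may be reused

dfd = os.open(tmpdir, os.O_RDONLY)           # directory descriptor for openat(2)
fd2 = os.open("log", os.O_RDONLY, dir_fd=dfd)
data = os.read(fd2, 100)
os.close(fd2)
os.close(dfd)
assert data == b"first line\n"

os.rename(os.path.join(tmpdir, "log"),       # rename(2), added in 4.2BSD
          os.path.join(tmpdir, "log.old"))
```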
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the earliest UNIX builds. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls read from and write to a file identified by a file descriptor; the only change came in SVR4, where a write can be interrupted at any time. The &#039;&#039;seek&#039;&#039; call moves to a specified position in a file. It originally used 16-bit offsets but was quickly replaced by &#039;&#039;lseek&#039;&#039;, present by SVR4, which uses 32-bit offsets. &#039;&#039;lseek&#039;&#039; is still used in modern Linux and Unix systems, even as developers work on &#039;&#039;lseek64&#039;&#039;, a call that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; call lets a process obtain the status of a file. SVR4 added two other versions of this call: &#039;&#039;fstat&#039;&#039;, which gives the status of a file specified by a file descriptor, and &#039;&#039;lstat&#039;&#039;, which gives the status of a symbolic link itself. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned nanosecond-resolution fields in the file’s timestamps. The &#039;&#039;fstatat&#039;&#039; call was added in Linux kernel 2.6.16, again for the same reason as &#039;&#039;openat&#039;&#039;. The release of 4.4BSD introduced two new calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, that provide information about a mounted file system; they are identical except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as its argument. These calls are used in UNIX environments; Linux provides &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; for the same purpose.&lt;br /&gt;
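A short sketch of descriptor-level I/O using Python&#039;s &#039;&#039;os&#039;&#039; wrappers; the nanosecond timestamp field mentioned above surfaces as st_mtime_ns:&lt;br /&gt;

```python
# Sketch: read/write/lseek/fstat at the descriptor level. st_mtime_ns is
# the nanosecond timestamp that stat has reported since kernel 2.5.48.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")          # write(2)
os.lseek(fd, 4, os.SEEK_SET)         # lseek(2): jump to byte offset 4
assert os.read(fd, 3) == b"456"      # read(2) continues from the new offset

info = os.fstat(fd)                  # fstat(2): status via the descriptor
assert info.st_size == 10
assert isinstance(info.st_mtime_ns, int)
os.close(fd)
```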
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link’s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were derived from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added, for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
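The distinction between hard and symbolic links can be demonstrated with Python&#039;s &#039;&#039;os&#039;&#039; wrappers (a sketch using a throwaway temporary directory):&lt;br /&gt;

```python
# Sketch: link/symlink/unlink. Removing one name of a hard-linked file
# leaves the other name (and the data) intact; unlinking a symbolic link
# removes only the link.
import os
import tempfile

tmpdir = tempfile.mkdtemp()
orig = os.path.join(tmpdir, "orig")
with open(orig, "w") as f:
    f.write("data")

hard = os.path.join(tmpdir, "hard")
os.link(orig, hard)                   # link(2): a second name, same inode
assert os.stat(orig).st_ino == os.stat(hard).st_ino

sym = os.path.join(tmpdir, "sym")
os.symlink(orig, sym)                 # symlink(2), added in 4.2BSD
os.unlink(sym)                        # unlink(2): removes only the symlink
assert os.path.exists(orig)

os.unlink(orig)                       # the data survives via the other hard link
assert open(hard).read() == "data"
```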
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware; they are mainly used to request and release devices, to logically attach and detach them, to get and modify device attributes, and to read from and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in UNIX and Linux are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;; these were among the few system calls available in UNIX in the 70s. Together they let the operating system attach and detach file systems on storage devices. Few changes were made to &#039;&#039;mount&#039;&#039;; most were new mount flags to enhance performance. For example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement, added in the 2.4.19 kernel, is per-process mount namespaces: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; call detaches the file system from the storage device; the only noteworthy change was the addition of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
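Mounting itself requires privileges, but per-process mount namespaces can be observed without them on Linux through the /proc interface; a hedged sketch (Linux-only, skipped on other systems):&lt;br /&gt;

```python
# Sketch: observing per-process mount namespaces (Linux-only; skipped on
# systems without /proc). Two processes share a mount namespace exactly
# when /proc/PID/ns/mnt names the same object for both.
import os

ns = None
if os.path.exists("/proc/self/ns/mnt"):
    ns = os.readlink("/proc/self/ns/mnt")   # e.g. "mnt:[4026531841]"
    print("mount namespace:", ns)
```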
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments give finer control over the device; a device is used as if it were a file, opened with the appropriate flags.&lt;br /&gt;
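For example (a sketch assuming the standard /dev/null and /dev/urandom device nodes exist):&lt;br /&gt;

```python
# Sketch: devices accessed as files. /dev/null and /dev/urandom are opened,
# written and read with the same open/read/write calls used for regular files.
import os

fd = os.open("/dev/null", os.O_WRONLY)
assert os.write(fd, b"discarded") == 9   # write(2) succeeds and the bytes vanish
os.close(fd)

fd = os.open("/dev/urandom", os.O_RDONLY)
rand = os.read(fd, 16)                   # read(2) from a character device
os.close(fd)
assert len(rand) == 16
```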
&lt;br /&gt;
&lt;br /&gt;
System V Release 4 (SVR4) introduced the &#039;&#039;mmap&#039;&#039; system call, which maps files or devices into memory (and &#039;&#039;munmap&#039;&#039; unmaps them). Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access the device through memory. This call is still used in Unix environments, but since Linux 2.4 the kernel has also provided &#039;&#039;mmap2&#039;&#039;, which is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, enabling the mapping of large files.&lt;br /&gt;
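Python&#039;s mmap module wraps the same call; a sketch mapping a temporary file and modifying it through memory:&lt;br /&gt;

```python
# Sketch: mapping a file into memory with mmap(2) via Python's mmap module
# (on 32-bit Linux the C library issues mmap2 underneath).
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello mapped world")

m = mmap.mmap(fd, 0)                 # map the whole file, shared, read/write
assert m[6:12] == b"mapped"          # file contents read as memory
m[0:5] = b"HELLO"                    # in-memory writes go back to the file
m.flush()
m.close()
os.close(fd)
assert open(path, "rb").read().startswith(b"HELLO")
```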
&lt;br /&gt;
&lt;br /&gt;
Version 7 of Unix introduced the &#039;&#039;ioctl&#039;&#039; system call for device-specific operations that cannot be done through the standard system calls, which helps the kernel cope with a multitude of devices. Each device driver provides a set of &#039;&#039;ioctl&#039;&#039; request codes that enable various operations on its device; because the request codes are hardware-dependent, there is no standard for this call.&lt;br /&gt;
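A sketch of one widely available request code, FIONREAD, which asks a descriptor how many bytes are waiting to be read (assuming a POSIX system where Python exposes the code via the termios module):&lt;br /&gt;

```python
# Sketch: an ioctl(2) request. FIONREAD asks a descriptor how many bytes
# are waiting to be read, a driver-style query with no dedicated syscall.
import array
import fcntl
import os
import termios

r, w = os.pipe()
os.write(w, b"abc")
buf = array.array("i", [0])
fcntl.ioctl(r, termios.FIONREAD, buf)   # request code defined by the driver
assert buf[0] == 3                      # three bytes queued in the pipe
os.close(r)
os.close(w)
```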
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return information about the system to the user or change it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file or device attributes. To fully understand the difference between Linux and UNIX with regard to system calls, one must examine these three subtypes of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time or date. In Linux this is done by the &#039;&#039;gettimeofday&#039;&#039; system call to get the date and time, and by &#039;&#039;settimeofday&#039;&#039; to set it. The earliest versions of UNIX interacted with the date and time through &#039;&#039;stime&#039;&#039;, which only set the time in seconds. &#039;&#039;stime&#039;&#039; is still available in Linux, but because there was no easy way to handle anything else, such as timezones, it evolved into &#039;&#039;settimeofday&#039;&#039;, and a corresponding get call was added where none had existed before.&lt;br /&gt;
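The get side can be exercised from Python, where time.time() is backed by gettimeofday (or the newer clock_gettime); setting the clock needs privileges, so only the read path appears in this sketch:&lt;br /&gt;

```python
# Sketch: the "get" half of get/set time of day. time.time() is backed by
# gettimeofday(2) or clock_gettime(2); settimeofday needs privileges, so
# only the read path is exercised.
import time

now = time.time()            # seconds since the epoch, sub-second precision
assert now > 1_000_000_000   # i.e. after 2001
```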
&lt;br /&gt;
The second sub type is get/set system data. UNIX does this by....&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3667</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3667"/>
		<updated>2010-10-14T06:04:24Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a mean by which programs in the user space can access kernel services. Systems calls vary from operating system to operating system, although the underlying concepts tends to be the same. In general, a process is not supposed to be able to access the kernel directly. It can&#039;t access kernel memory and it can&#039;t call kernel functions. When the CPU prevents a process from accessing the kernel, this prevention is commonly known as, the protected mode. On the other hand, system calls are an exception to this rule. For example, older x86 processors used an interrupt mechanism to go from user-space to kernel-space, but newer processor (PentiumII+) provided instructions that optimize this transition (using sysenter and sysexit instructions).  All system calls are small programs built using the C programming language.&lt;br /&gt;
&lt;br /&gt;
The Unix and Linux systems calls are roughly grouped into 6 major categories: file management, device management, information maintenance, process control, communications and miscellaneous calls. The miscellaneous calls are all the ones that don’t really fit in the other categories, like system calls dealing with errors. Today, the Unix and Linux operating system contains hundreds of system calls but in general, they all came from the 35 system calls that came with one of the original UNIX OS in the early 70s. In the next paragraphs, we’re going to describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality) and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group deal with every type of operation that is required to run a file system in the operating system. Create file, delete file, opening a file and closing a file are just a few examples of them and most of them hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; has been available since the earliest UNIX and it is still used in today’s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allows the users to change the file attributes and implements security to the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change the current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to be able to give more control on the file system to the applications. The call &#039;&#039;chroot&#039;&#039; allows the process to change the current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except it takes file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic link and introduce a new system call, &#039;&#039;lchown&#039;&#039;, that does not follow symbolic links. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; was added. It operates the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but takes more arguments to deal with relative pathnames. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allows processes to open and possibly create a file or device. Arguments flags are used to set either access modes, like O_RDONLY(read-only), to status flags, like O_APPEND(append mode). The only modifications made to the system calls were the addition of status flags where some of them are only linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor preventing it to be reused. No changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. At that point, to delete a directory, users would need to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls. With Unix 4.2BSD, &#039;&#039;rmdir&#039;&#039; and solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD allowing processes to change the name or the location of a file.As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; was added. It for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchmownat&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There is also system calls used to find, access and modify files. They are ’‘read’’, ‘‘write’’, ‘‘seek’’ and ‘‘stat’’. These were all part of earliest UNIX built. The ‘‘read’’ and ‘‘write’’ system calls allows to read and write from a file (assigned to a file descriptor). The only change was in SVR4 where a write can be interrupted at anytime. The ‘‘seek’’ system calls is used to go to a specified position in a file. This calls used a 16 bit address. But this was replaced very quickly for ‘‘lseek’’ as early as SVR4. It allows the call to use 32 bit addresses. It is still used in modern Linux and Unix systems even if developers are trying to implement ‘‘lseek64’’, a system call that will use 64 bit addresses. The ‘‘stat’’ system calls allows processes to get the status of a file. With SVR4, 2 other version of that system call were created: ‘‘fstat’’ and ‘‘lstat’’. They both do the same thing except ‘‘lstat’’ give the status of symbolic links and ‘‘fstat’’ give the status of a file specified by a file descriptor. Different operating systems will output different values to represent the state of a file. Since kernel 2.5.48, the stat returned a nanoseconds field in the file’s timestamp. The ‘‘fstatat’’ system call was added to Linux kernel 2.6.16. This is again for the same reason as ‘‘openat’’. With the release of 4.4BSD, two new system calls called ‘‘statvfs’’ and ‘‘fstatvfs’’ were introduced to provide information about a mounted file system. They both do the same thing except fstatvfs takes file descriptors as an argument. These calls are only used in an UNIX environment. In Linux, it has ‘‘statfs’’ and ‘‘fstatfs’’ to support that same call.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file and &#039;&#039;unlink&#039;&#039; deletes a file link’s name and possibly the file it refers to. If the name refers to a symbolic link, only the link is removed. No major changes were done to the &#039;&#039;unlink&#039;&#039; system calls but new system calls were create from &#039;&#039;link&#039;&#039;. The &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmadat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware and they are mainly used to requests for devices, release devices, to logically attach a device or to detach it, get and modified device attributes and read and write to them. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating system is ‘‘mount’’ and ‘‘umount’’. These were among the few system calls available in UNIX in the 70s. The two calls allowed the operating system to load file systems on storage devices. A few changes were done to the mount system calls most of them were the creation of new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag permits the directory changes on a file system synchronous. Another Linux improvement was to provide per-process mount namespaces. This was added on the 2.4.19 kernel. If a process was created using clone() with the CLONE_NEWNS flag, the process will have a new namespace initialized to be a copy of the namespace of the process that was cloned. The ‘‘umount’’ system call unmounted the file system from the storage device. The only noteworthy change to ‘‘umount’’ was the creation of ‘‘umount2’’ in Linux 2.1.116. It is the same as ‘‘umount’’ except it allows different flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ‘‘open’’, ‘‘read’’ and ‘‘write’’ calls can also be used to access devices. As discussed in the previous section, arguments flags are used to better control the device. You would use them as if the devices were files using the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With the System V Unix revision 4(SVR4) came the system call ‘‘mmap’’. This system call is used to map or unmap files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area allowing processes to access that device. This system call is still used in a Unix environment but since Linux 2.4, Linux replaced it by the mmap2 system call. It is basically the same as mmap except for a final argument specifying the offset into a file in 4096-byte units. This enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In version 7 of Unix, ‘‘ioctl’’ system call is used for device-specific operations that can’t be done using the standard system calls. This helps to deal with a multitude of devices. Each device drivers would provide a set of ioctl request code to allow various operations on their device. Each various request code are hardware dependent so there is no standard available for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that return the computers personal information back to the user or change it completely. These type of calls can be split up into three groups get/set time or date, get/set system data and get/set process,file, or device attributes. To fully understand the difference between Linux and UNIX in regrades to system call one must explore the three sub type of information maintenance calls and see how they have changed over time.&lt;br /&gt;
&lt;br /&gt;
The first subtype is get/set time or date; in Linux this is done by the system call....&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3662</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3662"/>
		<updated>2010-10-14T05:51:10Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Not in this group and I&#039;m not completely sure if this is relevant but I found that UNIX used the POSIX standard while Linux used LSB which is based on the POSIX standard. &lt;br /&gt;
This article outlines some conflicts between them [https://www.opengroup.org/platform/single_unix_specification/uploads/40/13450/POSIX_and_Linux_Application_Compatibility_Final_-_v1.0.pdf]. I didn&#039;t find the actual comparisons very comprehensible but the ideas are there. --[[User:Slay|Slay]] 15:05, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uh, where did Figure 1 and much of the current text come from?  It looks like it was cut and pasted from a random source.  Please don&#039;t plagiarize!  --[[User:Soma|Anil]] (19:24, 8 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Look into the reference article &amp;quot;Kernel command using Linux system calls&amp;quot;. Plagiarism is not my goal. I&#039;m using my own words to make a simple but complete description of a system call using the interrupt method. Check the references, and if you think it is too close, please let me know. It is hard when an author makes such a good and clear description.--[[User:Sblais2|Sblais2]] 21:02, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I thought it would be nice to first describe what a system call is and the two current methods of making one. The first is the interrupt method. The second, used in Linux 2.6.18+, uses the sysenter and sysexit instructions.--[[User:Sblais2|Sblais2]] 19:56, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
You can&#039;t use that figure.  And you can&#039;t copy the text either, even if you change the words slightly.  But really, you&#039;re just wasting your time.  This question is not talking about how system calls are invoked; if you wanted to discuss that, you should be discussing system call invocation mechanisms on the PDP-11 and VAX systems!  Here I&#039;m interested in what the calls are, i.e., kernel functions that can be invoked by a regular program.--[[User:Soma|Anil]]&lt;br /&gt;
&lt;br /&gt;
This link provides about 40 UNIX system calls along with examples of where they would be used, from the looks of it: [http://www.di.uevora.pt/~lmr/syscalls.html]. --[[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Thank you for clarifying things. I will go that route. --[[User:Sblais2|Sblais2]] 13:08, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I don&#039;t see everyone contributing to this group. Please do let the others in your group know, divide your work into sections and discuss here. If you have questions - ask.--[[User:praman|Preeti]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This link shows all the system calls from Linux 2.6.33 [http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html]&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 23:36, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
As no one in our group made a suggestion on the format of our essay, I&#039;ve put one in place. In your research, each system call should fit in one of the categories. If someone picks one up, please let me know ASAP. I will be working on that all day. Read the intro&#039;s last paragraph if you&#039;re not sure what you should write on. My English writing skills are not perfect, so if one of you guys sees ways to improve the text, please do. --[[User:Sblais2|Sblais2]] 14:29, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well, I feel like I&#039;m the only one on this team, but... Anyway, I&#039;ve completed the first two sections. Please try working on the next four.  If you want to modify something, please post a small gist of it here so we can all validate. Thanks. --[[User:Sblais2|Sblais2]] 20:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am wondering if we have actually split up the work accordingly. I am going to attempt to answer Information Maintenance; if anyone has dibs, please let me know. - Csulliva&lt;br /&gt;
I am finding this site very helpful for understanding system calls for Linux and UNIX: &lt;br /&gt;
http://ss64.com/bash/&lt;br /&gt;
-Csulliva&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I&#039;ll do process control calls and help out on the last part that is not written up yet. I&#039;ll read the other parts as well, just to get an understanding. [[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Csulliva, be careful not to confuse system calls and shell commands. Some of them actually have the same name, like &#039;&#039;mkdir&#039;&#039;, but shell commands are at the user level; some of them will make system calls to complete the operation.&lt;br /&gt;
It&#039;s good to finally hear from you guys.--[[User:Sblais2|Sblais2]] 01:39, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;ll work on the communication calls and the miscellaneous system calls. Not sure if you guys wanted to add more to this, but I can help out with writing a conclusion as well. I&#039;m somewhat good at writing, so if I see any little things that I could touch up, I&#039;ll help out with that. -R.arteaga&lt;br /&gt;
&lt;br /&gt;
After my confusion between bash commands and system calls, I had some trouble finding Linux system calls that would affect time. Here is a web site I found that helped me out with descriptions of the calls: http://www.digilife.be/quickreferences/QRC/LINUX%20System%20Call%20Quick%20Reference.pdf&lt;br /&gt;
Hopefully I am on the right track now...someone stop me if I am not. -Csulliva&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3393</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3393"/>
		<updated>2010-10-13T22:39:13Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions, because the CPU prevents this (so-called &amp;quot;protected mode&amp;quot;). System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to switch from user space to kernel space, while newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). On Unix and Linux, system calls are kernel functions, typically written in C, that programs reach through small library wrapper routines.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls can be roughly grouped into six categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that do not really fit the other categories, such as system calls dealing with errors. Today the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that came with the original UNIX OS in the early 1970s. In the next paragraphs we describe the system calls in each of the categories above, their evolution through history (major changes in functionality), and how they compare with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group cover every kind of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls let users change file attributes and implement security in the file system, while &#039;&#039;chdir&#039;&#039; lets a process change its current working directory. In the 4th Berkeley Software Distribution (4BSD), new system calls were added to give applications more control over the file system. The &#039;&#039;chroot&#039;&#039; call lets a process replace its current root directory with one specified by a path argument, and &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added; they operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take additional arguments to deal with relative pathnames. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments set either access modes, like O_RDONLY (read-only), or status flags, like O_APPEND (append mode); the only modifications made to these calls over time were additional status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call closes a file descriptor so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD introduced &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the earliest UNIX builds. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file (identified by a file descriptor); the only change came in SVR4, where a write can be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call moves to a specified position in a file. It used a 16-bit offset, so it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets; &#039;&#039;lseek&#039;&#039; is still used in modern Linux and Unix systems, even as developers implement &#039;&#039;lseek64&#039;&#039;, which uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call lets processes get the status of a file. With SVR4, two other versions of that call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, stat has returned a nanoseconds field in the file&#039;s timestamps. The &#039;&#039;fstatat&#039;&#039; system call was added in Linux kernel 2.6.16, again for the same reason as &#039;&#039;openat&#039;&#039;. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used only in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a file link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added, for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware: they are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in UNIX and Linux are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in UNIX in the 1970s, and they allow the operating system to attach file systems on storage devices to the directory tree. The mount system call has changed little; most of the changes were new mount flags added to control behaviour and performance. For example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new mount namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; system call detaches a file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the addition of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments give finer control over the device; a device is accessed as if it were a file, using the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the &#039;&#039;mmap&#039;&#039; system call, which maps files or devices into memory (and &#039;&#039;munmap&#039;&#039; unmaps them). Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments, but since Linux 2.4, Linux has also provided the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 Unix, the &#039;&#039;ioctl&#039;&#039; system call performs device-specific operations that cannot be expressed through the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides its own set of ioctl request codes to allow various operations on its device. Because the request codes are hardware dependent, there is no single standard set of them for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that report the system&#039;s own information back to the user or modify it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3391</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3391"/>
		<updated>2010-10-13T22:37:58Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions, because the CPU prevents this (so-called &amp;quot;protected mode&amp;quot;). System calls are the controlled exception to this rule. For example, older x86 processors used an interrupt mechanism to switch from user space to kernel space, while newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). On Unix and Linux, system calls are kernel functions, typically written in C, that programs reach through small library wrapper routines.&lt;br /&gt;
&lt;br /&gt;
Unix and Linux system calls can be roughly grouped into six categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are the ones that do not really fit the other categories, such as system calls dealing with errors. Today the Unix and Linux operating systems contain hundreds of system calls, but in general they all descend from the 35 system calls that came with the original UNIX OS in the early 1970s. In the next paragraphs we describe the system calls in each of the categories above, their evolution through history (major changes in functionality), and how they compare with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group cover every kind of operation required to run a file system in the operating system. Creating, deleting, opening and closing a file are just a few examples, and most of these calls have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls let users change file attributes and implement security in the file system, while &#039;&#039;chdir&#039;&#039; lets a process change its current working directory. In the 4th Berkeley Software Distribution (4BSD), new system calls were added to give applications more control over the file system. The &#039;&#039;chroot&#039;&#039; call lets a process replace its current root directory with one specified by a path argument, and &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not follow symbolic links. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added; they operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take additional arguments to deal with relative pathnames. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Flag arguments set either access modes, like O_RDONLY (read-only), or status flags, like O_APPEND (append mode); the only modifications made to these calls over time were additional status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call closes a file descriptor so that it can be reused; no changes were made to it. &#039;&#039;mkdir&#039;&#039; creates a directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; Unix 4.2BSD introduced &#039;&#039;rmdir&#039;&#039;, which solved the problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the earliest UNIX builds. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls read from and write to a file (identified by a file descriptor); the only change came in SVR4, where a write can be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call moves to a specified position in a file. It used a 16-bit offset, so it was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets; &#039;&#039;lseek&#039;&#039; is still used in modern Linux and Unix systems, even as developers implement &#039;&#039;lseek64&#039;&#039;, which uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call lets processes get the status of a file. With SVR4, two other versions of that call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of a symbolic link itself and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file; since kernel 2.5.48, stat has returned a nanoseconds field in the file&#039;s timestamps. The &#039;&#039;fstatat&#039;&#039; system call was added in Linux kernel 2.6.16, again for the same reason as &#039;&#039;openat&#039;&#039;. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as an argument. These calls are used only in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to support the same operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a file link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to &#039;&#039;unlink&#039;&#039;, but new system calls were created from &#039;&#039;link&#039;&#039;: the &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added, for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are tied to hardware: they are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls in UNIX and Linux are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in UNIX in the 1970s, and they allow the operating system to attach file systems on storage devices to the directory tree. The mount system call has changed little; most of the changes were new mount flags added to control behaviour and performance. For example, since Linux 2.5.19 the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it receives a new mount namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; system call detaches a file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the addition of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, flag arguments give finer control over the device; a device is accessed as if it were a file, using the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V Release 4 (SVR4) came the &#039;&#039;mmap&#039;&#039; system call, which maps files or devices into memory (and &#039;&#039;munmap&#039;&#039; unmaps them). Once a device is mapped, the call returns a pointer to the mapped area, allowing processes to access that device through memory. This system call is still used in Unix environments, but since Linux 2.4, Linux has also provided the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Introduced in Version 7 Unix, the &#039;&#039;ioctl&#039;&#039; system call performs device-specific operations that cannot be expressed through the standard system calls, which helps the kernel deal with a multitude of devices. Each device driver provides its own set of ioctl request codes to allow various operations on its device. Because the request codes are hardware dependent, there is no single standard set of them for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
Information maintenance calls are system calls that report the system&#039;s own information back to the user or modify it. These calls can be split into three groups: get/set time or date, get/set system data, and get/set process, file, or device attributes.&lt;br /&gt;
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3388</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3388"/>
		<updated>2010-10-13T22:33:54Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Not in this group and I&#039;m not completely sure if this is relevant but I found that UNIX used the POSIX standard while Linux used LSB which is based on the POSIX standard. &lt;br /&gt;
This article outlines some conflicts between them [https://www.opengroup.org/platform/single_unix_specification/uploads/40/13450/POSIX_and_Linux_Application_Compatibility_Final_-_v1.0.pdf]. I didn&#039;t find the actual comparisons very comprehensible but the ideas are there. --[[User:Slay|Slay]] 15:05, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uh, where did Figure 1 and much of the current text come from?  It looks like it was cut and pasted from a random source.  Please don&#039;t plagiarize!  --[[User:Soma|Anil]] (19:24, 8 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Look into the reference article &amp;quot;Kernel command using Linux system calls&amp;quot;. Plagiarism is not my goal. I&#039;m using my own words to make a simple but complete description of a system call using the interrupt method. Check the references, and if you think it is too close, please let me know. It is hard when an author makes such a good and clear description.--[[User:Sblais2|Sblais2]] 21:02, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I thought it would be nice to first describe what a system call is and the two current methods of making one. The first is the interrupt method. The second, used in Linux 2.6.18+, uses the sysenter and sysexit instructions.--[[User:Sblais2|Sblais2]] 19:56, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
You can&#039;t use that figure.  And you can&#039;t copy the text either, even if you change the words slightly.  But really, you&#039;re just wasting your time.  This question is not talking about how system calls are invoked; if you wanted to discuss this, you should be discussing system call invocation mechanisms on the PDP-11 and VAX systems!  Here I&#039;m interested in what are the calls, i.e., kernel functions that can be invoked by a regular program.--[[User:Soma|Anil]]&lt;br /&gt;
&lt;br /&gt;
This link provides about 40 UNIX system calls along with examples of where they would be used: [http://www.di.uevora.pt/~lmr/syscalls.html]. --[[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Thank you for clarifying things. I will go that route. --[[User:Sblais2|Sblais2]] 13:08, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I don&#039;t see everyone contributing to this group. Please do let the others in your group know, divide your work into sections and discuss here. If you have questions - ask.--[[User:praman|Preeti]]&lt;br /&gt;
&lt;br /&gt;
This link shows all the system calls from Linux 2.6.33 [http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html]&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 23:36, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
As no one in our group made a suggestion on the format of our essay, I&#039;ve put one in place. In your research, each system call should fit in one of the categories. If someone picks one up, please let me know ASAP. I will be working on that all day. Read the intro&#039;s last paragraph if you&#039;re not sure what you should write on. My English writing skills are not perfect, so if one of you guys sees ways to improve the text, please do. --[[User:Sblais2|Sblais2]] 14:29, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well, I feel like I&#039;m the only one on this team, but anyway, I&#039;ve completed the first two sections. Please try working on the next four. If you want to modify something, please post a small gist of it here so we can all validate. Thanks. --[[User:Sblais2|Sblais2]] 20:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am wondering if we have actually split up the work accordingly. I am going to attempt to answer Information Maintenance; if anyone has dibs, please let me know. - Csulliva&lt;br /&gt;
I found this site very helpful for understanding system calls for Linux and UNIX: &lt;br /&gt;
http://ss64.com/bash/&lt;br /&gt;
-Csulliva&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3344</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_2&amp;diff=3344"/>
		<updated>2010-10-13T20:59:49Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Not in this group and I&#039;m not completely sure if this is relevant but I found that UNIX used the POSIX standard while Linux used LSB which is based on the POSIX standard. &lt;br /&gt;
This article outlines some conflicts between them [https://www.opengroup.org/platform/single_unix_specification/uploads/40/13450/POSIX_and_Linux_Application_Compatibility_Final_-_v1.0.pdf]. I didn&#039;t find the actual comparisons very comprehensible but the ideas are there. --[[User:Slay|Slay]] 15:05, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uh, where did Figure 1 and much of the current text come from?  It looks like it was cut and pasted from a random source.  Please don&#039;t plagiarize!  --[[User:Soma|Anil]] (19:24, 8 October 2010 (UTC))&lt;br /&gt;
&lt;br /&gt;
Look into the reference article &amp;quot;Kernel command using Linux system calls&amp;quot;. Plagiarism is not my goal. I&#039;m using my own words to make a simple but complete description of a system call using the interrupt method. Check the references, and if you think it is too close, please let me know. It is hard when an author makes such a good and clear description.--[[User:Sblais2|Sblais2]] 21:02, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I thought it would be nice to first describe what a system call is and the two current methods of making one. The first is the interrupt method. The second, used in Linux 2.6.18+, uses the sysenter and sysexit instructions.--[[User:Sblais2|Sblais2]] 19:56, 8 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
You can&#039;t use that figure.  And you can&#039;t copy the text either, even if you change the words slightly.  But really, you&#039;re just wasting your time.  This question is not talking about how system calls are invoked; if you wanted to discuss this, you should be discussing system call invocation mechanisms on the PDP-11 and VAX systems!  Here I&#039;m interested in what are the calls, i.e., kernel functions that can be invoked by a regular program.--[[User:Soma|Anil]]&lt;br /&gt;
&lt;br /&gt;
This link provides about 40 UNIX system calls along with examples of where they would be used: [http://www.di.uevora.pt/~lmr/syscalls.html]. --[[User:Apharan2|Apharan2]]&lt;br /&gt;
&lt;br /&gt;
Thank you for clarifying things. I will go that route. --[[User:Sblais2|Sblais2]] 13:08, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I don&#039;t see everyone contributing to this group. Please do let the others in your group know, divide your work into sections and discuss here. If you have questions - ask.--[[User:praman|Preeti]]&lt;br /&gt;
&lt;br /&gt;
This link shows all the system calls from Linux 2.6.33 [http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html]&lt;br /&gt;
--[[User:Sblais2|Sblais2]] 23:36, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
As no one in our group made a suggestion on the format of our essay, I&#039;ve put one in place. In your research, each system call should fit in one of the categories. If someone picks one up, please let me know ASAP. I will be working on that all day. Read the intro&#039;s last paragraph if you&#039;re not sure what you should write on. My English writing skills are not perfect, so if one of you guys sees ways to improve the text, please do. --[[User:Sblais2|Sblais2]] 14:29, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well, I feel like I&#039;m the only one on this team, but anyway, I&#039;ve completed the first two sections. Please try working on the next four. If you want to modify something, please post a small gist of it here so we can all validate. Thanks. --[[User:Sblais2|Sblais2]] 20:10, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am wondering if we have actually split up the work accordingly. I am going to attempt to answer Information Maintenance; if anyone has dibs, please let me know. - Csulliva&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3342</id>
		<title>COMP 3000 Essay 1 2010 Question 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_2&amp;diff=3342"/>
		<updated>2010-10-13T20:57:37Z</updated>

		<summary type="html">&lt;p&gt;Csulliva: /* Information Maintenance Calls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
How do the available system calls in modern versions of the Linux Kernel (2.6.30+) compare with the system calls available in the earliest versions of UNIX? How has the system call interface been expanded, and why? Focus on major changes or extensions in functionality.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
A system call is a means by which programs in user space can access kernel services. System calls vary from operating system to operating system, although the underlying concepts tend to be the same. In general, a process is not supposed to access the kernel directly: it cannot read kernel memory and it cannot call kernel functions. The CPU enforces this separation (known as &amp;quot;protected mode&amp;quot;), and system calls are the controlled exception to the rule. For example, older x86 processors used an interrupt mechanism to switch from user space to kernel space, while newer processors (Pentium II and later) provide instructions that optimize this transition (the sysenter and sysexit instructions). In UNIX and Linux, the system calls themselves are kernel functions, typically written in C.&lt;br /&gt;
&lt;br /&gt;
The UNIX and Linux system calls are roughly grouped into six categories: file management, device management, information maintenance, process control, communications, and miscellaneous calls. The miscellaneous calls are those that do not really fit the other categories, such as system calls dealing with errors. Today, the UNIX and Linux operating systems contain hundreds of system calls, but in general they all descend from the roughly 35 system calls that shipped with the original UNIX in the early 1970s. In the next paragraphs, we describe the various system calls in each of the categories mentioned above, their evolution through history (major changes in functionality), and a comparison with the earliest versions of UNIX.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File Management Calls==&lt;br /&gt;
&lt;br /&gt;
The system calls in this group handle every operation required to run a file system: creating, deleting, opening, and closing files are just a few examples, and most of them have hardly changed throughout the years. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;chmod&#039;&#039;, &#039;&#039;chown&#039;&#039; and &#039;&#039;chdir&#039;&#039; have been available since the earliest UNIX and are still used in today&#039;s Linux kernels. The &#039;&#039;chmod&#039;&#039; and &#039;&#039;chown&#039;&#039; calls allow users to change file attributes and implement security on the file system. The system call &#039;&#039;chdir&#039;&#039; allows the process to change its current working directory. In the 4th distribution of UNIX from Berkeley (4BSD), new system calls were added to give applications more control over the file system. The call &#039;&#039;chroot&#039;&#039; allows the process to replace its current root directory with one specified in a path argument. &#039;&#039;fchmod&#039;&#039; and &#039;&#039;fchdir&#039;&#039; are the same as &#039;&#039;chmod&#039;&#039; and &#039;&#039;chdir&#039;&#039; except that they take file descriptors as arguments. As of Linux kernel 2.1.81, the &#039;&#039;chown&#039;&#039; system call follows symbolic links, and a new system call, &#039;&#039;lchown&#039;&#039;, was introduced that does not. As of Linux 2.6.16, &#039;&#039;fchownat&#039;&#039; and &#039;&#039;fchmodat&#039;&#039; were added. They operate the same way as &#039;&#039;chown&#039;&#039; and &#039;&#039;chmod&#039;&#039; but take additional arguments to deal with relative pathnames. &lt;br /&gt;
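The os module in Python wraps these calls almost one-to-one, so a minimal sketch is easy (the temporary file and the 0o600 mode here are arbitrary choices):&lt;br /&gt;

```python
import os
import stat
import tempfile

# Create a scratch file to operate on (any writable file would do).
fd, path = tempfile.mkstemp()
os.close(fd)

# chmod: set permissions to read/write for the owner only (octal 0o600).
os.chmod(path, 0o600)
mode = stat.S_IMODE(os.stat(path).st_mode)

# chdir: change the current working directory, then restore it.
old_cwd = os.getcwd()
os.chdir(tempfile.gettempdir())
os.chdir(old_cwd)

os.unlink(path)
```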
&lt;br /&gt;
&lt;br /&gt;
Another four of the original UNIX system calls are &#039;&#039;open&#039;&#039;, &#039;&#039;creat&#039;&#039;, &#039;&#039;mkdir&#039;&#039; and &#039;&#039;close&#039;&#039;. The &#039;&#039;open&#039;&#039; and &#039;&#039;creat&#039;&#039; calls allow processes to open, and possibly create, a file or device. Argument flags are used to set either access modes, such as O_RDONLY (read-only), or status flags, such as O_APPEND (append mode). The only modifications made to these system calls since have been additional status flags, some of which are Linux-specific. The &#039;&#039;close&#039;&#039; call allows processes to close a file descriptor, releasing it for reuse; no changes were made to it. &#039;&#039;mkdir&#039;&#039; allows the creation of a file directory. Originally, to delete a directory, users needed to make a series of &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039; system calls; with UNIX 4.2BSD, &#039;&#039;rmdir&#039;&#039; solved that problem. The &#039;&#039;rename&#039;&#039; call was also added in 4.2BSD, allowing processes to change the name or the location of a file. As of Linux 2.6.16, &#039;&#039;openat&#039;&#039;, &#039;&#039;mkdirat&#039;&#039; and &#039;&#039;renameat&#039;&#039; were added, for the same reasons as &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;. &lt;br /&gt;
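The creation calls can be sketched the same way, again via the thin Python wrappers (the directory and file names docs, papers and notes.txt are arbitrary):&lt;br /&gt;

```python
import os
import tempfile

base = tempfile.mkdtemp()

# mkdir, then rename the directory (rename arrived with 4.2BSD).
d1 = os.path.join(base, "docs")
d2 = os.path.join(base, "papers")
os.mkdir(d1)
os.rename(d1, d2)

# open with O_CREAT behaves like the old creat call; O_WRONLY is the access mode.
f = os.path.join(d2, "notes.txt")
fd = os.open(f, os.O_WRONLY | os.O_CREAT, 0o644)
os.close(fd)   # close releases the descriptor for reuse
created = os.path.exists(f)

# rmdir (4.2BSD) removes a directory once it is empty.
os.unlink(f)
os.rmdir(d2)
os.rmdir(base)
```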
&lt;br /&gt;
&lt;br /&gt;
There are also system calls used to find, access and modify files: &#039;&#039;read&#039;&#039;, &#039;&#039;write&#039;&#039;, &#039;&#039;seek&#039;&#039; and &#039;&#039;stat&#039;&#039;, all part of the earliest UNIX build. The &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; system calls allow a process to read from and write to a file (identified by a file descriptor). The only change came in SVR4, where a write can be interrupted at any time. The &#039;&#039;seek&#039;&#039; system call is used to move to a specified position in a file. It used 16-bit offsets, but was replaced very quickly by &#039;&#039;lseek&#039;&#039;, as early as SVR4, which uses 32-bit offsets. It is still used in modern Linux and UNIX systems, even as developers implement &#039;&#039;lseek64&#039;&#039;, a system call that uses 64-bit offsets. The &#039;&#039;stat&#039;&#039; system call allows processes to get the status of a file. With SVR4, two other versions of that system call were created: &#039;&#039;fstat&#039;&#039; and &#039;&#039;lstat&#039;&#039;. They do the same thing, except that &#039;&#039;lstat&#039;&#039; gives the status of symbolic links and &#039;&#039;fstat&#039;&#039; gives the status of a file specified by a file descriptor. Different operating systems output different values to represent the state of a file. Since kernel 2.5.48, &#039;&#039;stat&#039;&#039; has returned a nanoseconds field in the file&#039;s timestamps. The &#039;&#039;fstatat&#039;&#039; system call was added in Linux kernel 2.6.16, again for the same reason as &#039;&#039;openat&#039;&#039;. With the release of 4.4BSD, two new system calls, &#039;&#039;statvfs&#039;&#039; and &#039;&#039;fstatvfs&#039;&#039;, were introduced to provide information about a mounted file system; they do the same thing, except that &#039;&#039;fstatvfs&#039;&#039; takes a file descriptor as its argument. These calls are used only in a UNIX environment; Linux has &#039;&#039;statfs&#039;&#039; and &#039;&#039;fstatfs&#039;&#039; to serve the same purpose.&lt;br /&gt;
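A quick sketch of the read/write/lseek/stat cycle through the same Python wrappers (the file contents are arbitrary):&lt;br /&gt;

```python
import os
import tempfile

fd, path = tempfile.mkstemp()

# write, then lseek back to the start, then read the bytes again.
os.write(fd, b"hello syscalls")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 64)
os.close(fd)

# stat: file size and, since Linux 2.5.48, nanosecond timestamps.
info = os.stat(path)
size = info.st_size
mtime_ns = info.st_mtime_ns
os.unlink(path)
```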
&lt;br /&gt;
&lt;br /&gt;
The last two original UNIX system calls in this category that are still used today are &#039;&#039;link&#039;&#039; and &#039;&#039;unlink&#039;&#039;. &#039;&#039;link&#039;&#039; creates a hard link to an existing file, and &#039;&#039;unlink&#039;&#039; deletes a link&#039;s name and possibly the file it refers to; if the name refers to a symbolic link, only the link is removed. No major changes were made to the &#039;&#039;unlink&#039;&#039; system call, but new system calls were created from &#039;&#039;link&#039;&#039;. The &#039;&#039;symlink&#039;&#039; system call was added in 4.2BSD to allow the creation of symbolic links in the file system. Again, in Linux 2.6.16, &#039;&#039;linkat&#039;&#039; and &#039;&#039;unlinkat&#039;&#039; were added for the same reasons as &#039;&#039;openat&#039;&#039;, &#039;&#039;fchmodat&#039;&#039; and &#039;&#039;fchownat&#039;&#039;.&lt;br /&gt;
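Hard versus symbolic links can be demonstrated the same way; note how the data survives unlinking the original name while a hard link remains (the file names are arbitrary):&lt;br /&gt;

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "original")
with open(target, "w") as fh:
    fh.write("data")

# link: a second directory entry for the same inode (hard link).
hard = os.path.join(base, "hardlink")
os.link(target, hard)
nlink = os.stat(target).st_nlink   # now 2

# symlink (4.2BSD): a separate file that merely names the target.
soft = os.path.join(base, "softlink")
os.symlink(target, soft)
is_sym = os.path.islink(soft)

# unlink removes only the named entry; the data survives while links remain.
os.unlink(target)
still_readable = os.path.exists(hard)
```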
&lt;br /&gt;
== Device Management Calls==&lt;br /&gt;
&lt;br /&gt;
The device management system calls are linked to hardware. They are mainly used to request and release devices, to logically attach or detach a device, to get and modify device attributes, and to read from and write to devices. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two of the most important system calls for the UNIX and Linux operating systems are &#039;&#039;mount&#039;&#039; and &#039;&#039;umount&#039;&#039;. These were among the few system calls available in UNIX in the 1970s; the two calls allowed the operating system to load file systems on storage devices. A few changes were made to the &#039;&#039;mount&#039;&#039; system call, most of them new mount flags to enhance performance. For example, since Linux 2.5.19, the MS_DIRSYNC flag makes directory changes on a file system synchronous. Another Linux improvement was per-process mount namespaces, added in the 2.4.19 kernel: if a process is created using clone() with the CLONE_NEWNS flag, it gets a new namespace initialized as a copy of the namespace of the process that cloned it. The &#039;&#039;umount&#039;&#039; system call unmounts the file system from the storage device. The only noteworthy change to &#039;&#039;umount&#039;&#039; was the creation of &#039;&#039;umount2&#039;&#039; in Linux 2.1.116, which is the same as &#039;&#039;umount&#039;&#039; except that it accepts flags to control the operation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;open&#039;&#039;, &#039;&#039;read&#039;&#039; and &#039;&#039;write&#039;&#039; calls can also be used to access devices. As discussed in the previous section, argument flags are used to better control the device: you use devices as if they were files, with the appropriate flags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With System V UNIX release 4 (SVR4) came the system call &#039;&#039;mmap&#039;&#039;, which maps or unmaps files or devices into memory. Once a device is mapped, the system call returns a pointer to the mapped area, allowing processes to access that device directly. This system call is still used in a UNIX environment, but since Linux 2.4 the kernel has also provided the &#039;&#039;mmap2&#039;&#039; system call. It is basically the same as &#039;&#039;mmap&#039;&#039; except that its final argument specifies the offset into the file in 4096-byte units, which enables the mapping of large files.&lt;br /&gt;
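A minimal sketch of &#039;&#039;mmap&#039;&#039; via the Python mmap module, which wraps the same call (the 4096-byte size is an arbitrary choice):&lt;br /&gt;

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)

# Map the file into memory; reads and writes go through ordinary indexing.
m = mmap.mmap(fd, 4096)
m[0:5] = b"HELLO"
m.flush()
m.close()
os.close(fd)

# The write made through the mapping lands in the underlying file.
with open(path, "rb") as fh:
    head = fh.read(5)
os.unlink(path)
```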
&lt;br /&gt;
&lt;br /&gt;
Version 7 of UNIX introduced the &#039;&#039;ioctl&#039;&#039; system call for device-specific operations that cannot be done using the standard system calls; this helps deal with the multitude of devices. Each device driver provides a set of ioctl request codes to allow various operations on its device. The request codes are hardware-dependent, so there is no standard for this system call.&lt;br /&gt;
&lt;br /&gt;
== Information Maintenance Calls==&lt;br /&gt;
    * get/set time or date&lt;br /&gt;
    * get/set system data&lt;br /&gt;
    * get/set process, file, or device attributes&lt;br /&gt;
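These information maintenance calls also have direct wrappers in Python; a sketch covering one example of each kind (the 0o022 umask value is arbitrary):&lt;br /&gt;

```python
import os
import time

# get time or date: time.time wraps the kernel time-of-day call.
now = time.time()

# get system data: uname reports kernel name, release, and machine type.
sysinfo = os.uname()

# get/set process attributes: pid, parent pid, and the file-creation mask.
pid = os.getpid()
ppid = os.getppid()
old_umask = os.umask(0o022)   # umask both sets and returns the previous value
os.umask(old_umask)           # restore the original mask
```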
&lt;br /&gt;
== Process Control Calls==&lt;br /&gt;
&lt;br /&gt;
== Communications Calls==&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous System Calls ==&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
Here is the original manual --[[User:Lmundt|Lmundt]] 18:29, 7 October 2010 (UTC)&lt;br /&gt;
http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html&lt;br /&gt;
&lt;br /&gt;
Linux Programmer&#039;s Manual, Linux man-pages project.&lt;br /&gt;
http://www.kernel.org/doc/man-pages/&lt;/div&gt;</summary>
		<author><name>Csulliva</name></author>
	</entry>
</feed>