<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jslonosky</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jslonosky"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Jslonosky"/>
	<updated>2026-04-22T13:53:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6974</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6974"/>
		<updated>2010-12-03T11:35:28Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox, VMware and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen. --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him when I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here; the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. I&#039;m not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put down notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him. --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we gotta focus on that all together, not just one person, for sure. --[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization. --[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that; look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate. I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap-and-emulate), but they provide optimizations and ways to increase the efficiency of the trap calls between the nested environments, so that&#039;s definitely a contribution. But it&#039;s more of a performance optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
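To make the trap handling we keep discussing concrete, here is a toy Python sketch of the flattened model; the function and the names in it are invented for illustration and are not code from the paper:&lt;br /&gt;

```python
# Toy model of flattened nested trap handling (illustrative sketch only;
# names invented, not code from the Turtles paper).
# x86 has a single hardware trap level: every trap from any nested guest
# exits directly to the bottom hypervisor L0. L0 then dispatches the trap
# to whichever hypervisor level should emulate it, in one forward step,
# rather than bouncing it up the stack one level at a time.

def handle_trap(trap, owner_level):
    """L0 receives every trap and routes it to the level that owns it."""
    path = ["L0 intercepts " + trap]     # hardware always exits to L0
    if owner_level == 0:
        path.append("L0 emulates " + trap)
    else:
        # one forward to the owning hypervisor, not a level-by-level climb
        path.append("L0 forwards " + trap + " to L" + str(owner_level))
        path.append("L" + str(owner_level) + " emulates " + trap)
    return path

# A privileged instruction executed by the L2 guest but owned by the L1
# (guest) hypervisor still exits to L0 first:
print(handle_trap("CPUID", 1))
```

The point of the sketch is only the dispatch shape: the cost of nesting shows up in how many such exits occur, which is what their optimizations try to reduce.&lt;br /&gt;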
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us with another week to do it. I am sure we all have at least 3 projects due soon other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somewhat too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness or a bad spot, if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004. --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things. How are we to critique it? By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMC technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMC(0 -&amp;gt; 1) annotation, for example, isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach: they vaguely talk about the sideline things and provide references. The VMC technology, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow; probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up, man. Sounds good. I&#039;ll see what else I can do in between other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed at it on the exam review. If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind:&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
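As a study aid for the shadow page table question above, here is a toy Python sketch (my own illustration; the tables and addresses are made up) of how a shadow table composes the guest and host translations:&lt;br /&gt;

```python
# Toy model of shadow page tables (illustrative only).
# A guest page table maps guest-virtual pages to guest-physical pages;
# the host page table maps guest-physical pages to host-physical pages.
# Without hardware support for two-dimensional paging, the hypervisor
# maintains a "shadow" table that composes the two, so the MMU can walk
# a single table mapping guest-virtual pages directly to host-physical.

def build_shadow(guest_pt, host_pt):
    """Compose guest and host page tables into one shadow table."""
    shadow = {}
    for gva, gpa in guest_pt.items():
        if gpa in host_pt:          # only map pages the host has backed
            shadow[gva] = host_pt[gpa]
    return shadow

guest_pt = {0x1000: 0x4000, 0x2000: 0x5000}   # guest-virtual to guest-physical
host_pt  = {0x4000: 0x9000, 0x5000: 0xA000}   # guest-physical to host-physical

shadow = build_shadow(guest_pt, host_pt)
print(shadow)  # {4096: 36864, 8192: 40960}

# The cost: every time the guest edits its page table, the hypervisor
# must trap the write and rebuild or patch the shadow table. That
# maintenance overhead is what multi-dimensional paging tries to avoid.
```

So shadow page tables must be used whenever the hardware can only walk one page table at a time but two (or more, when nested) levels of translation exist.&lt;br /&gt;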
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The background section is done. I will keep editing it and filtering some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper, editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware; I will try to write something for this. Then we go into the CPU, memory, I/O and optimization. And I can see that someone already handled those things here in the discussion. So we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the contribution section today or tonight, so no worries. The critique: as I said, I added some stuff there, but we still need to debate the good and bad of the design as we each perceive it. Since it&#039;s a critique, we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together in concise sentences? We have to get straight to the point. We are not aiming for length, rather content, as you all know obviously. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation here in the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours 2 hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub-headings and stuff; I think it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that. I got sort of caught up with a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware-support model, because it&#039;s briefly mentioned in the paper and it&#039;s not even available on x86 machines. I will ask the prof about those things. I will be seeing him 30-40 minutes from now; his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page. I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csulliva. I will go ahead and do it for him if he can&#039;t and has something else to do. --JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we&#039;d better get this finished tonight by 12 or something.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
The headings, he said, are fine. But he did mention that the article should still make sense and be readable if we removed the headings or the section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice I&#039;m currently working on Critique section. Anticipate updates, modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I am looking at some of the writing on the main page.  Would you guys mind if I just edit it a bit? Make it sound a bit better?  It&#039;s all your work --JSlonosky 03:37, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
Hey dude, I&#039;m currently giving the background concepts one final edit. If you want, you can do the other parts. Also, we still need to work on the critique and the theory of operation. I&#039;m staying here for the next 2-3 hours; we need to get this done. --[[User:Hesperus|Hesperus]] 03:58, 3 December 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
I have edited and organized the pictures and figures I created for the background concepts. The background concepts section needs no further editing as far as I&#039;m concerned. I will be writing something for the theory part of the contributions. As mentioned, I will do one final edit before going to bed! --[[User:Hesperus|Hesperus]] 04:29, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Heya, yeah man I will go over the rest of them.  I looked at the critique, it looks relatively concise.  I might have to read the paper again and see what else I spot. --JSlonosky 05:35, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
I&#039;ve added the resources for the contributions section and linked them with their URLs. I&#039;ve also edited a few things here and there. Now, one last thing we need to make sure of: does our essay provide answers to the questions the prof posted in the exam review? I&#039;m sure I covered the uses and purposes of nested virtualization, but the other question needs to be covered or addressed if we haven&#039;t done that yet. The rest of the class will rely on our writings to study this paper for the final. I&#039;m going over the whole thing one last time. --[[User:Hesperus|Hesperus]] 05:56, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
------------&lt;br /&gt;
The CPU, I/O and memory virtualization sections are extremely messy, so I&#039;m just gonna edit them a bit; I&#039;m not going to change or alter the content at all. --[[User:Hesperus|Hesperus]] 06:46, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
Alright, I didn&#039;t find them that bad. I find that the first question is answered, as well as shadow page tables, but I find the shadow page tables part somewhat lacking. --JSlonosky 07:35, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Yeah, sorry, I didn&#039;t mean to sound like a douche. I just meant that some of the stuff there could have been a bit better at explaining what shadow page tables are, when they are used, and what&#039;s so bad about them in terms of performance. And how does the new multi-dimensional paging differ in its design? Because this is basically what the question in the review asks for. I think what the prof will do is basically read the essays and see how they relate to or address the questions he posted, so I just wanted to make sure that we covered that. I edited that section, added the resources that were missing and added a few things of my own to make things a bit clearer. Did one final spelling and grammar check as well. It&#039;s good that almost all of us did some of the work here. I feel that our essay is quite readable and comprehensive. Hope we&#039;ll do great. Nice working with you guys.&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=6Dh3l2Lv-VY God luck and good speed!]. --[[User:Hesperus|Hesperus]] 11:10, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Haha, no man, you didn&#039;t, don&#039;t worry. It&#039;s good. We should do well. --JSlonosky 11:35, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest the illusion that it is running directly on the physical hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad; it is associated with a number of areas where this technology is used, such as data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or to gain or access privileged hardware context, it triggers a trap or fault which is caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not and, based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
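As a rough illustration of the model above (the instruction names and dispatch structure are our own, not from the paper), a trap-and-emulate loop can be sketched as:&lt;br /&gt;

```python
# Minimal trap-and-emulate sketch (hypothetical; real hypervisors act on
# hardware traps, not Python objects).

PRIVILEGED = {"read_cr3", "out", "hlt"}  # example privileged instructions

def host_hypervisor_run(guest_instructions):
    """Run a guest's instruction stream, trapping privileged instructions."""
    log = []
    for insn in guest_instructions:
        if insn in PRIVILEGED:
            # Trap: the host inspects the instruction and emulates its outcome.
            log.append(f"trap: emulated {insn}")
        else:
            # Unprivileged instructions run directly on the hardware.
            log.append(f"direct: {insn}")
    return log

trace = host_hypervisor_run(["add", "read_cr3", "mov"])
```

The key property the sketch shows is that the guest never executes a privileged operation itself; it only ever sees the emulated result.&lt;br /&gt;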
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they do not really alter the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but apparently the authors were able to achieve it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machines directly through a hypervisor of choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every other hypervisor running directly on top of it. For instance, L0 (the host hypervisor) runs L1; if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trapping for it.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting that guest think it is running on the actual hardware. The idea here is that when a guest hypervisor tries to operate with hardware-level privileges, it provokes a fault or trap; this trap is caught by the host hypervisor and inspected to see whether it is a legitimate or appropriate request. If it is, the host carries out the privileged operation on the guest&#039;s behalf, again letting the guest think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor, L0. The host hypervisor then forwards the trap and virtualization state to the level responsible for handling it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to launch L2 traps down to L0, and L0 then forwards this command back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
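A minimal sketch of this routing rule, under our own (illustrative) assumption that an exit caused by the guest at level Ln is ultimately the responsibility of its parent hypervisor at level Ln-1:&lt;br /&gt;

```python
# Single-level architecture: the hardware always exits to L0 first; L0 then
# forwards the exit to the hypervisor responsible for the guest that caused it.

def route_exit(faulting_level):
    """Return the sequence of hypervisor levels an exit visits.

    faulting_level: nesting level of the guest that trapped (e.g. 2 for L2).
    The parent hypervisor, level faulting_level - 1, is the real handler,
    but on single-level hardware the exit always lands in L0 first.
    """
    path = [0]                      # hardware always exits to L0
    handler = faulting_level - 1    # the hypervisor that runs this guest
    if handler != 0:
        path.append(handler)        # L0 forwards the exit up to the handler
    return path
```

So an L1 exit is handled by L0 directly, while an L2 exit visits L0 and is then forwarded to L1, which is exactly the extra round trip the paper identifies as the main source of overhead.&lt;br /&gt;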
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work [2]:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and only L0 occupies the architectural hypervisor mode. So, to multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2 (enabling L0 to run L2 directly). L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility to handle it.&lt;br /&gt;
Consider how a single L2 exit is handled: L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, each of those operations itself traps. A single high-level L2 (or L3) exit therefore causes many low-level exits, and more exits mean less performance. This problem was corrected by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
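The VMCS merge described above can be sketched as follows (the dict layout and field names are purely illustrative; a real VMCS is an opaque, hardware-defined structure accessed through vmread/vmwrite):&lt;br /&gt;

```python
# Hypothetical sketch of the VMCS0->2 merge. Each VMCS is modeled as a dict
# with a "guest" state area and a "host" state area.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Build VMCS0->2 so that L0 can run L2 directly.

    The guest state comes from VMCS1->2, since it describes L2.
    The host state comes from VMCS0->1, since a hardware exit must return
    control to L0 (the only real hypervisor), never directly to L1.
    """
    return {
        "guest": dict(vmcs_1_2["guest"]),  # L2's register/control state
        "host": dict(vmcs_0_1["host"]),    # exit back into L0
    }

vmcs_0_1 = {"guest": {"rip": "L1_entry"}, "host": {"rip": "L0_exit_handler"}}
vmcs_1_2 = {"guest": {"rip": "L2_entry"}, "host": {"rip": "L1_exit_handler"}}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
```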
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU exposes only two page tables: the regular page table (virtual to physical) and the EPT (guest physical to host physical). The three translations must therefore be compressed onto the two hardware tables, going from start to end in two hops instead of three. One way to do this is shadow paging for the virtual machine, i.e. shadow-on-EPT, which compresses the three logical translations into two tables. However, the EPT tables rarely change, whereas the guest page tables change frequently, so instead L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
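The compression of two translation tables into a single composed table can be sketched as follows (page numbers and the flat dict layout are our own illustration; real EPTs are multi-level hardware structures):&lt;br /&gt;

```python
# Sketch of folding two translations into one, in the spirit of constructing
# EPT0->2 from EPT0->1 and EPT1->2.

def compose(outer, inner):
    """Compose two translation tables: inner maps A to B, outer maps B to C.

    The result maps A to C directly, so a lookup takes one hop instead of
    two. Pages whose intermediate mapping is absent are simply dropped
    (a real MMU would fault on them).
    """
    return {a: outer[b] for a, b in inner.items() if b in outer}

ept_1_2 = {0: 7, 1: 3}       # L2 physical page -> L1 physical page
ept_0_1 = {7: 42, 3: 9}      # L1 physical page -> L0 (machine) page
ept_0_2 = compose(ept_0_1, ept_1_2)   # L2 physical -> machine, one hop
```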
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which know they are running on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0, L1) needs to use an IOMMU to let its virtual machines access the device safely. There is only one hardware IOMMU, so L0 needs to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 can program the device directly and the device&#039;s DMAs are stored into L2&#039;s memory space directly.&lt;br /&gt;
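The IOMMU compression is the same folding idea applied to DMA addresses; a sketch (the table layout is entirely hypothetical; a hardware IOMMU uses multi-level page tables, not dicts):&lt;br /&gt;

```python
# Sketch of multi-level device assignment for DMA: L1 programs its emulated
# IOMMU with a device-address -> L1-physical mapping, L0 holds an
# L1-physical -> machine mapping, and L0 folds the two into the one table
# actually loaded into the hardware IOMMU.

def fold_iommu(l0_table, l1_table):
    """Collapse L1's emulated IOMMU table through L0's own table.

    l1_table: device (L2) DMA address -> L1 physical page
    l0_table: L1 physical page -> machine page
    Returns the table for the single hardware IOMMU, mapping device DMA
    addresses straight to machine pages (i.e. into L2's memory).
    """
    return {dma: l0_table[l1pfn]
            for dma, l1pfn in l1_table.items()
            if l1pfn in l0_table}

hw_table = fold_iommu({10: 100, 11: 101}, {0: 10, 1: 11})
```

With the folded table in place, the device DMAs land in L2 memory without either hypervisor intervening on the data path.&lt;br /&gt;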
&lt;br /&gt;
&lt;br /&gt;
How they implemented the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2, and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. They optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0 the most time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying and tracking. VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field, but the VMCS data can be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not have to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
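The field-copy trade-off can be illustrated with a toy model (assuming, purely for illustration, that the VMCS were a flat array of 64-bit fields, which is not how Intel actually lays it out):&lt;br /&gt;

```python
# Toy model of the bulk-copy micro-optimization: instead of one (trapping)
# vmread/vmwrite per field, a contiguous run of fields is moved with a
# single large memory copy.
import array

def copy_per_field(dst, src, fields):
    """Baseline: one single-field operation per VMCS field."""
    for i in fields:
        dst[i] = src[i]          # stands in for a vmread + vmwrite pair
    return len(fields)           # number of single-field operations

def copy_bulk(dst, src, start, count):
    """Optimized: one large copy over a contiguous run of fields."""
    dst[start:start + count] = src[start:start + count]
    return 1                     # one memory copy

src = array.array("Q", range(16))
dst = array.array("Q", [0] * 16)
ops_bulk = copy_bulk(dst, src, 4, 8)

dst2 = array.array("Q", [0] * 16)
ops_baseline = copy_per_field(dst2, src, range(4, 12))
```

Here the bulk path does the same data movement in one operation instead of eight, which is the shape of the saving when each per-field access would otherwise trap to L0.&lt;br /&gt;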
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
The overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb.&lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits that runs in a guest hypervisor such as L1 is much slower than the same code in L0.&lt;br /&gt;
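As a back-of-the-envelope illustration only (the paper measures a single extra nesting level; anything deeper is our extrapolation, not a measured result), per-level overheads compound multiplicatively:&lt;br /&gt;

```python
# If each nesting level multiplies runtime by (1 + overhead), then n extra
# levels give a factor of (1 + overhead)**n. The paper reports about 10.3%
# (kernbench) and 6.3% (SPECjbb) for one extra level.

def compounded_slowdown(per_level_overhead, levels):
    """Total slowdown factor after `levels` extra levels of nesting."""
    return (1 + per_level_overhead) ** levels

kernbench_l2 = compounded_slowdown(0.103, 1)   # the measured case
kernbench_l4 = compounded_slowdown(0.103, 3)   # hypothetical extrapolation
```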
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated in the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers and does not affect the end-user in any clearly detectable way regarding the usage of applications on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of this type that does not modify the hardware (there have been less efficient designs), we expect to see increased interest in the nested integration model described above. --[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, due to the fact that hypervisors can function inconspicuously above other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and direct paging (the multi-dimensional paging technique).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (it&#039;s citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware with virtualization support. In the Turtles project solution, all levels of hypervisor can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing, organization wise: They provide links and resources that can help give explanations to the concepts that they briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved, to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by the exponentially generated exits. Furthermore, we observed that the paper performs its tests at the L2 level, i.e. a guest with two hypervisors below it. It might have been useful, in order to understand the limits of nesting, to investigate higher levels of nesting such as L4 or L5, just to see what the effect is. Another significant detriment of the paper is that optimizations such as the vmread and vmwrite avoidance are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* Lots of exits cause a significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing, organization wise: some concepts, such as the VMCSs, are written as though you are already familiar with how they work, or have read the appropriate references for that section of the research project.&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[2] INTEL CORPORATION. Intel 64 and IA-32 Architectures Software Developer&#039;s Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6887</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6887"/>
		<updated>2010-12-03T07:35:18Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits, to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he&#039;s still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging (which is discussed in the paper), and computer ring security (which again they touch on at some point in the paper). I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of these concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the Contribution and Critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project done for tomorrow, so after 1 I will be free to work on this until it&#039;s time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, the models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0 -&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach: they only touch briefly on side topics and provide references. The VMCS, from what I understood, just creates an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization topics. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here are some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* The solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, as other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for the clear-up, man. Sounds good. I&#039;ll see what else I can do in between the other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed to it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow during his office hours. If anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The background section is done. I will keep editing it and filtering some of the information. I don&#039;t have a lot to do today, so I will spend the whole day working on the paper, editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware; I will try to write something for this. Then we go into the CPU, memory, I/O and optimizations. And I can see that someone already handled those things here in the discussion, so we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the Contribution section today or tonight, so no worries. For the critique, as I said, I added some stuff there, but we still need to debate the good and bad of the design as we perceive it; since it&#039;s a critique we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length, but rather content, as you all obviously know. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation part here on the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours 2 hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub-headings; it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that; I got sort of caught up in a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware support model, because it&#039;s only briefly mentioned in the paper and it&#039;s not even available on x86 machines. I will ask the prof about those things. I will be seeing him in 30-40 minutes from now; his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page; I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csulliva. I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we&#039;d better get this finished tonight by 12 or so.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
Regarding the headings, he said they are fine. But he did mention that the article should still make sense, and be readable, if we removed the headings or section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice. I&#039;m currently working on the Critique section. Anticipate updates; modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I am looking at some of the writing on the main page.  Would you guys mind if I just edit it a bit? Make it sound a bit better?  It&#039;s all your work --JSlonosky 03:37, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
Hey dude, I&#039;m currently giving the background concepts one final edit. If you want, you can do the other parts. Also, we still need to work on the critique and the theory of operation. I&#039;m staying on here for the next 2-3 hours; we need to get this done. --[[User:Hesperus|Hesperus]] 03:58, 3 December 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
I have edited and organized the pictures and figures I created for the background concepts. The background concepts section needs no further editing as far as I&#039;m concerned. I will be writing something for the theory part of the contributions. As mentioned, I will do one final edit before going to bed! --[[User:Hesperus|Hesperus]] 04:29, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Heya, yeah man I will go over the rest of them.  I looked at the critique, it looks relatively concise.  I might have to read the paper again and see what else I spot. --JSlonosky 05:35, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
I&#039;ve added the resources for the contributions section and linked them with their URLs. I&#039;ve also edited a few things here and there. Now, one last thing we need to make sure of: does our essay provide answers to the questions the prof posted in the exam review? I&#039;m sure I covered the uses and purposes of nested virtualization, but the other question needs to be covered or addressed if we haven&#039;t done that yet. The rest of the class will rely on our writing to study this paper for the final. I&#039;m going over the whole thing one last time. --[[User:Hesperus|Hesperus]] 05:56, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
------------&lt;br /&gt;
The CPU, I/O and memory virtualization sections are extremely messy, so I&#039;m just gonna edit them a bit; I&#039;m not going to change or alter the content at all. --[[User:Hesperus|Hesperus]] 06:46, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
Alright. I didn&#039;t find them that bad. I find that the first question is answered, as is the one about shadow page tables, though the shadow page table answer is somewhat thin. --JSlonosky 07:35, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, merely gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used, like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
A hypervisor, also referred to as a VMM (virtual machine monitor), is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the possible issues that may arise due to the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bare-metal host (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3; and so on.&lt;br /&gt;
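To make the L0..Ln numbering concrete, here is a minimal Python sketch (the function and variable names are made up for illustration) of a nesting chain in which each level runs the level above it:&lt;br /&gt;

```python
# Hypothetical sketch: model the L0..Ln nesting convention, where L0 is the
# bare-metal hypervisor and each Li runs L(i+1) as its guest.
def nesting_chain(depth):
    """Return the chain of levels for a given nesting depth."""
    return ["L%d" % i for i in range(depth + 1)]

chain = nesting_chain(3)
# Each (parent, child) pair means "parent runs child as a virtual machine".
pairs = list(zip(chain, chain[1:]))
```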
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain or access privileged hardware context, it triggers a trap or a fault which gets handled or caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
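The trap-and-emulate idea above can be sketched in a few lines of Python (all names here are hypothetical and only illustrate the control flow; this is not the paper&#039;s code):&lt;br /&gt;

```python
# Hypothetical sketch of trap-and-emulate: unprivileged guest instructions
# run directly; privileged ones trap to the host hypervisor, which either
# emulates the requested outcome or denies it.
PRIVILEGED = {"write_cr3", "vmlaunch", "out"}

def run_instruction(instr, host_policy):
    """Execute one guest instruction under the trap-and-emulate model."""
    if instr not in PRIVILEGED:
        return "executed directly"          # runs straight on the CPU
    # Privileged: the CPU traps and the host hypervisor takes over.
    if host_policy(instr):
        return "trapped; emulated by host"  # host emulates the outcome
    return "trapped; denied"

allow_all = lambda instr: True
```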
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement their system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it gets corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but the authors were able to do it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization with multiple host hypervisors comes down to efficiency. For example: virtualization on servers has been rapidly gaining popularity, and the next evolutionary step is to extend single-level memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machine directly through a hypervisor of their choice. In addition, nesting provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every other hypervisor running on top of it. For instance, if L0 (the host hypervisor) runs L1 and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trapping and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that in order for a guest hypervisor to gain hardware-level privileges, it triggers a fault or a trap; this trap is then caught by the main host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate request. If it is, the host grants the privilege to the guest, again letting it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to the level above that is involved or responsible. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines basically follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
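As a rough illustration of the single-level model described above, here is a small Python sketch (with hypothetical names) showing that every trap first lands at L0, which then forwards it to the hypervisor that is responsible for it:&lt;br /&gt;

```python
# Hypothetical sketch of trap routing in the single-level model: the
# hardware delivers every exit to L0, and L0 forwards it to the owning
# hypervisor when the trap is not its own to handle.
def handle_trap(trapping_level, owner_level):
    """Return the path a trap takes: always down to L0, then to its owner."""
    path = ["L0"]                          # hardware delivers every exit to L0
    if owner_level > 0:
        path.append("L%d" % owner_level)   # L0 forwards to the responsible level
    return path

# Example: L2 traps on something that L1 (its hypervisor) must handle.
path = handle_trap(2, 1)
```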
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works [2]:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure the hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine under the single-level architectural model. To multiplex the hardware, L2 must run as a virtual machine of L0, so L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to form VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2, which will eventually trap. L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine to handle.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 or L3 exit causes many low-level exits (more exits mean less performance). The problem was corrected by making each single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation; however, the hardware MMU provides only two page tables, via the EPT, which translate virtual to guest-physical and guest-physical to host-physical. They compress the three translations onto the two tables, going from start to finish in two hops instead of three. This is done with a shadow page table for the virtual machine and shadow-on-EPT: shadow-on-EPT compresses the three logical translations into two tables. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
&lt;br /&gt;
How I/O virtualization works:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest is aware it is running on a driver interface (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, they used an IOMMU for safe DMA bypass. With nesting there is a 3x3 matrix of I/O virtualization options; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this, they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. For DMA, each hypervisor (L0, L1) needs to use an IOMMU to let its virtual machine access the device safely. There is only one IOMMU in the platform, so L0 needs to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table so that L2 can program the device directly. The device&#039;s DMAs go into the L2 memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How they implemented the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. They optimized the transitions between L1 and L2; each such transition involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only if it has been modified, carefully balancing full copying against partial copying with tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time. VMCS data can be accessed&lt;br /&gt;
without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits is much slower when it runs in the guest hypervisor (L1) than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing VMCS data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits triggered by the exit-handling code itself).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers and does not affect the end-user in any clearly detectable way when using applications on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of this type that does not modify hardware (there have been only half-efficient designs), we expect to see increased interest in the nested integration model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, due to the fact that hypervisors can function inconspicuously towards other nested hypervisors and VMs without being detected. Moreover, the efficiency overhead is reduced to 6-10% per level, thanks to optimizations such as omitted vmwrites and direct paging (the multi-dimensional paging technique). &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project solution, all levels of hypervisor can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by the multiplying exits. Furthermore, we observed that the paper performs tests at the L2 level, a guest with two hypervisors below it. It might have been useful, for understanding the limits of nesting, if they had investigated higher levels of nesting such as L4 or L5, just to see what the effect is. Another significant detriment is that optimizations such as avoiding vmread and vmwrite are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* Lots of exits cause a significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: some concepts, such as the VMCS, are written as if you were already familiar with how they work, or had read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[2]  INTEL CORPORATION. Intel 64 and IA-32 Architectures Software Developers Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6857</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6857"/>
		<updated>2010-12-03T06:22:43Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it&#039;s essential that we provide some insight and background into the concepts &lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation consists of a guest hypervisor and a virtualized environment, giving the guest virtual machine the illusion that it&#039;s running on the bare hardware. Realistically, however, the host operating system treats the virtual machine as an application.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used: data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on full-virtualization of hardware within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guests among one another, and with the host hardware and operating system. The hypervisor also has control of host resources without the host truly knowing which resources the VMM controls. [2]&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside another virtual machine. For instance, the main operating system hypervisor (L0) can run the virtual machines L1, L2 and L3. In turn, each of those virtual machines is able to run its own virtual machines, and so on (Figure 1). &lt;br /&gt;
[[File:virtualization2.png|thumb|right|Figure 1: Nested virtualization. The guest hypervisor denotes the creation of a virtual machine.|400px]]&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, ranging from 0 to 3.&lt;br /&gt;
Ring 0 (root mode) is the most privileged level, allowing access to the bare hardware components. The operating system kernel must &lt;br /&gt;
execute in Ring 0 in order to access the hardware and keep control secure. User programs execute in Ring 3 (guest mode). Ring 1 and Ring 2 are dedicated to device drivers and other operations. [7]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
Para-virtualization is a virtualization model that requires the guest OS kernel to be modified in order to allow the guest some direct access to the host hardware. In contrast to the full virtualization that we discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest kernel to allow privileged hardware access via special instructions called hypercalls. The advantage is that there are fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization. [3]&lt;br /&gt;
&lt;br /&gt;
===x86 models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
The trap and emulate model is based on the idea that when a guest virtual machine attempts to execute privileged instructions, it triggers a trap or fault that goes down to level L0, where the host hypervisor resides. Since the host hypervisor is the only one capable of executing privileged instructions at Ring 0, it handles the trap caused by the guest and provides an emulation of the desired instruction to the guest. This way, the guest gains Ring 0 privilege through the help of the hypervisor. It is important to note that the guest is unaware of this emulation and operates as if it were running on the bare hardware.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
x86-based systems are based on the single-level model of architectural virtualization support. In this hardware model, the host hypervisor L0 (running in Ring 0) handles all traps caused by any guest hypervisor running at any level of the virtualization stack. Assume that the host hypervisor (L0) runs L1. When L1 attempts to run its own virtual machine L2, this causes a trap that goes down to the host hypervisor at level L0; L0 then handles the trap and performs the emulation required for L1 to create L2. More generally, every trap occurring at level Ln drops to L0, where the host hypervisor resides. The host hypervisor then forwards the trap to the parent of Ln, namely Ln-1, whose handling in turn traps down to L0 again, and so on. This trap-handling switching keeps occurring until the desired emulated result reaches Ln, allowing it the privilege to execute (Figure 2).&lt;br /&gt;
&lt;br /&gt;
[[File:single-levelV.png|thumb|right|Figure 2: The single-level architecture support for virtualization, which relies on the host hypervisor (L0) to handle every trap caused by a guest.|400px]]&lt;br /&gt;
&lt;br /&gt;
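The trap flow described above can be sketched as a toy simulation. This is a simplified model for illustration only; the function name and its bookkeeping are invented here, not taken from the paper.&lt;br /&gt;

```python
# Toy model of the single-level (x86) architecture: every trap, at any
# nesting depth, first lands at the bare-metal hypervisor L0, which then
# forwards it to the guest hypervisor responsible for the trapping level.
# All names here are illustrative, not from the paper's code.

def handle_trap(trapping_level, exits=None):
    """Simulate one privileged operation at level L<trapping_level>.

    Returns the list of control transfers it causes. Forwarding a trap
    to the hypervisor at level n-1 makes that hypervisor execute
    privileged exit-handling code, which itself traps back down to L0.
    """
    if exits is None:
        exits = []
    exits.append((trapping_level, 0))   # a trap always drops to L0 first
    parent = trapping_level - 1
    if parent > 0:
        exits.append((0, parent))       # L0 forwards to the parent level
        handle_trap(parent, exits)      # the parent's handler traps too
    return exits

# A single trap in L3 bounces through L0 repeatedly before it resolves:
transfers = handle_trap(3)
# transfers == [(3, 0), (0, 2), (2, 0), (0, 1), (1, 0)]
```

In this model a guest at nesting depth n costs 2n - 1 control transfers per trap, which illustrates why exits multiply as the nesting depth grows.&lt;br /&gt;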
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run a particular application or OS that is not compatible with the existing or running OS as a virtual machine. Operating systems could also provide the user with a compatibility mode of other operating systems or applications. An example of this is the Windows XP mode that is available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement their own system on the host hardware without worrying about compatibility issues. [9]&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is a hollow program or network that appears to be functioning to outside users, but in reality, it is only there as a security tool to watch or trap hacker attacks. By using nested virtualization, we can create a honeypot of our system as virtual machines and see how our virtual system is being attacked or what kind of features are being exploited. We can take advantage of the fact that those virtual honeypots can easily be controlled, manipulated, destroyed or even restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for upgrade. Instead of moving each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents. [8]&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed or recreated, and it can even be restored from a snapshot taken while the virtual machine was running.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid-1970s [4]. Early research in the area assumed hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. Software-based solutions have recently become available [5]; however, these solutions suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to having nested virtualization without architectural support is that, as you increase the levels of virtualization, the number of control switches between the different levels of hypervisors increases. A trap in a highly nested virtual machine first goes to the bottom level hypervisor, which can send it up to the second level hypervisor, which can in turn send it up (or back down), until it arrives at potentially the worst case possible, where it reaches the hypervisor that is one level below the virtual machine itself. The trap instruction can be bounced between different levels of hypervisor, which results in one trap instruction multiplying to many trap instructions. &lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful because this support does not always exist, such as on x86 processors. Solutions that do not require this suffer from significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique to reconcile the lack of hardware support on available hardware with efficiency. It is, for the most part, able to contain the problem of a single nested trap expanding into many more trap instructions, at least for the nesting depth the authors considered, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors acknowledge the CPU, memory, and IO devices as the three key resources that they need to share. Combining this, the paper presents a solution to the problem of how to multiplex the CPU, memory, and IO efficiently between multiple virtual operating systems and hypervisors on a system which has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
The non-stop evolution of computers invites intricate designs that are virtualized and work in harmony with cloud computing. The paper contributes to this trend by allowing consumers and users to inject machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for security and compatibility. The sophisticated abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, equip programmers for further development and ideas built on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs around a particular VM state, which could most definitely be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
The fundamental idea of the Turtles Project relies on multiplexing the hardware among the involved guest virtual machines. When a virtual machine like L1 attempts to run L2, this triggers a trap that gets handled by L0, just as we illustrated earlier in the single-level architecture model. This trap includes the environment specifications needed to run L2 on the bare hardware. L0 converts L2&#039;s virtual memory to L1&#039;s virtual memory so that they run at the same level. Thus, the approach ends up flattening the virtualization levels and running them all as L0-level virtual machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the following question should be asked: if the host hypervisor ends up running and multiplexing the hardware among all the guest virtual machines, then how can we keep track of the virtualization levels? The answer lies within the host hypervisor. By using special control structures like the VMCS and the VMCB, the hypervisor has the ability to differentiate between the different levels and keep track of each parent and guest at each level.&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure the hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine under the single-level architectural model. So, in order to multiplex the hardware, L2 must run as a virtual machine of L0: L0 merges the VMCSs, combining VMCS0-&amp;gt;1 with VMCS1-&amp;gt;2 to form VMCS0-&amp;gt;2 (enabling L0 to run L2 directly). L0 then launches L2, which will eventually trap. L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine to handle. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 or L3 exit causes many low-level exits (more exits mean less performance). This problem was corrected by making each single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
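As a rough illustration of the merge step, the VMCS can be thought of as a set of guest-state fields (describing the VM to run) and host-state fields (describing where control goes on an exit). This is a hedged sketch only: the dictionary representation and field names are invented here; real VMCS fields are defined by Intel&#039;s specification.&lt;br /&gt;

```python
# Illustrative sketch (not the paper's actual code) of how L0 merges
# VMCS0->1 and VMCS1->2 into VMCS0->2 so it can run L2 directly.
# Field names are invented for the example.

def merge_vmcs(vmcs01, vmcs12):
    """Build VMCS0->2 from the two existing control structures.

    Guest-state fields describing L2 come from VMCS1->2; host-state
    fields must come from VMCS0->1, because on an L2 exit the hardware
    transfers control to L0, not to L1.
    """
    vmcs02 = {}
    vmcs02.update({k: v for k, v in vmcs12.items() if k.startswith("guest_")})
    vmcs02.update({k: v for k, v in vmcs01.items() if k.startswith("host_")})
    return vmcs02

vmcs01 = {"host_rip": "l0_exit_handler", "guest_rip": "l1_entry"}
vmcs12 = {"host_rip": "l1_exit_handler", "guest_rip": "l2_entry"}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
# vmcs02 runs L2 while delivering its exits to L0's handler
```

L0 can then decide, per exit, whether to handle the event itself or to reflect it up to L1 as a virtual exit.&lt;br /&gt;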
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation; however, the hardware MMU provides only two page tables, via the EPT, which translate virtual to guest-physical and guest-physical to host-physical. They compress the three translations onto the two tables, going from start to finish in two hops instead of three. This is done with a shadow page table for the virtual machine and shadow-on-EPT: shadow-on-EPT compresses the three logical translations into two tables. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
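The construction of EPT0-&amp;gt;2 is essentially a composition of two address maps. The toy model below illustrates this with plain dictionaries (an invented simplification: real EPTs map pages through multi-level tables, not flat dictionaries of addresses):&lt;br /&gt;

```python
# Toy model of multi-dimensional paging: the hardware walks only two
# translation tables, so L0 folds the L2->L1 translation (maintained by
# L1 as EPT1->2) together with its own EPT0->1 into a single EPT0->2.

def compose(outer, inner):
    """Compose two address maps: inner translates level n+1 -> n,
    outer translates level n -> n-1. Unmapped entries are dropped."""
    return {src: outer[mid] for src, mid in inner.items() if mid in outer}

ept01 = {0x1000: 0x7000, 0x2000: 0x8000}   # L1-physical -> L0-physical
ept12 = {0xA000: 0x1000, 0xB000: 0x2000}   # L2-physical -> L1-physical

ept02 = compose(ept01, ept12)               # L2-physical -> L0-physical
# ept02 == {0xA000: 0x7000, 0xB000: 0x8000}
```

With EPT0-&amp;gt;2 installed, an L2 memory access resolves in two hardware hops (guest page table plus EPT0-&amp;gt;2) instead of exiting to software for the third translation.&lt;br /&gt;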
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [10], para-virtualized drivers, where the guest is aware it is running on a driver interface [11][12], and direct device assignment [13][14], which gives the best performance. To get the best performance, they used an IOMMU for safe DMA bypass. With nesting there is a 3x3 matrix of I/O virtualization options; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this, they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. For DMA, each hypervisor (L0, L1) needs to use an IOMMU to let its virtual machine access the device safely. There is only one IOMMU in the platform, so L0 needs to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table so that L2 can program the device directly. The device&#039;s DMAs go into the L2 memory space directly.&lt;br /&gt;
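The IOMMU compression can be sketched as follows. This is a hedged toy model with invented names: L1 programs an IOMMU that L0 merely emulates, and each such update traps to L0, which installs the composed L2-to-machine mapping in the one real IOMMU table.&lt;br /&gt;

```python
# Sketch of multi-level device assignment (all names invented): L0
# exposes an emulated IOMMU to L1. When L1 maps L2 memory for device
# DMA, the update traps to L0, which translates the L1-physical page
# through its own EPT0->1 and installs the resulting L2->machine
# mapping in the single hardware IOMMU page table.

class NestedIOMMU:
    def __init__(self, ept01):
        self.ept01 = ept01      # L1-physical -> L0-physical (machine)
        self.hw_table = {}      # the one real hardware IOMMU page table

    def l1_map(self, dma_addr, l1_phys):
        """Called when L1's write to its emulated IOMMU traps to L0."""
        self.hw_table[dma_addr] = self.ept01[l1_phys]

iommu = NestedIOMMU(ept01={0x1000: 0x7000})
iommu.l1_map(dma_addr=0xD000, l1_phys=0x1000)
# The device can now DMA at address 0xD000 straight into machine
# page 0x7000, which belongs to L2, bypassing both L0 and L1.
```

The point of the compression is that the device itself sees only the single hardware table, so DMA needs no software intervention on the data path.&lt;br /&gt;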
&lt;br /&gt;
==Micro optimizations==&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. They optimized the transitions between L1 and L2; each such transition involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only if it has been modified, carefully balancing full copying against partial copying with tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time. VMCS data can be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies, although this may not work on processors other than the ones used in their testing. The main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
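The partial-copy idea can be sketched as below. This is an invented illustration, not the paper&#039;s code: instead of transferring every field one vmread/vmwrite at a time, only the fields known to be dirty are copied, in bulk.&lt;br /&gt;

```python
# Sketch of the copy-tracking micro-optimization (names invented):
# rather than accessing one VMCS field per vmread/vmwrite (each a
# potential exit for L1), L0 copies only the fields that were actually
# modified, as one ordinary memory copy.

def sync_vmcs(shadow, active, dirty_fields):
    """Copy only the dirty fields from the shadow VMCS that L1 wrote
    into the active VMCS that L0 uses, skipping untouched fields."""
    for field in dirty_fields:
        active[field] = shadow[field]
    return active

shadow = {"guest_rip": 0x4000, "guest_rsp": 0x9000, "ctrl_pin": 0x16}
active = {"guest_rip": 0x1000, "guest_rsp": 0x9000, "ctrl_pin": 0x16}
sync_vmcs(shadow, active, dirty_fields={"guest_rip"})
# Only guest_rip was copied; the unmodified fields cost nothing.
```

The trade-off the authors describe is between copying everything unconditionally and paying the bookkeeping cost of tracking which fields are dirty.&lt;br /&gt;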
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The Pros ===&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers, and should not have too large an effect on an end user running an application in a nested virtual machine, especially if the user is using the system at a low nesting depth. One can further argue that the most common use cases for nested virtualization that the authors mention in section 1, such as virtualizing OSs that are already hypervisors (like Windows 7) and hypervisors in the cloud, will be at a shallow depth. It then follows that the testing the authors do in section 4 covers the most common use cases, so users can expect similarly impressive performance. The contribution is also visible with respect to security and compatibility. On the security side, this nested virtualization technique can be used to study hypervisor-level rootkits, such as Blue Pill [6], by hosting an infected hypervisor as a guest on top of another hypervisor. Since this is the first successful implementation of this type that does not require modified hardware (earlier attempts were research designs of mixed quality), we expect to see increased interest in the nested integration model described above. The framework also makes for convenient testing and debugging, since hypervisors can run under other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and multi-dimensional paging, which is very appealing.&lt;br /&gt;
&lt;br /&gt;
=== The Cons ===&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, since the authors introduce an additional level of abstraction, continuing the long-standing trade-off between flexibility and performance. The performance hit is mainly imposed by the multiplication of exits that occurs when a nested guest traps: control passes to the lowest-level hypervisor, which may hand the trap off to the hypervisors above it before finally returning to the guest. Time complexity can therefore become an issue when multiple levels of virtualization are involved.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we observed that the paper only performs tests at the L2 level, a guest with two hypervisors below it. It might have been useful for understanding the limits of nesting if the authors had investigated higher levels of nesting such as L4 or L5, because it is difficult to predict how the system will react as nesting gets deeper: the increase in the number of traps and other performance-killing problems can potentially be exponential. Another significant detriment is that the paper relies on optimizations, such as avoiding vmread/vmwrite operations, that are aimed at specific CPUs, as stated on page 7, section 3.5: &amp;quot;(...) this optimization does not strictly adhere to the VMX specifications, and thus might not work on processors other than the ones we have tested&amp;quot;. This means that some of the techniques the authors use to increase performance are not reproducible on other systems, so the generality of parts of their solution may be limited.&lt;br /&gt;
&lt;br /&gt;
=== The Style and Presentation ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner. It does an excellent job conveying the technical details. The paper does, however, assume a high level of background knowledge and familiarity with the subject, especially with some of the more technical points of the architecture the hardware uses to implement virtualization. For example, paragraph 4.1.2, &amp;quot;Impact of Multidimensional paging&amp;quot;, attempts to illustrate the technique with an example using terms such as EPT and L1, which may not be familiar to readers not used to the technical language. The paper does, on the other hand, touch on a wide range of topics in the field of virtualization, including CPU, memory and I/O device virtualization. This wide scope means that many of the major components of virtualization are discussed, so in the process of understanding the paper one learns a lot about many different parts of the field.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The research presented in the paper is the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. This is a major improvement over the currently available solutions, and the techniques used to achieve nested virtualization are comprehensive and interesting. It also has good potential as a basis for future research. The authors refer to security and clouds as two potential areas for future research; another interesting direction could be applying the authors&#039; approach of compressing multiple levels of abstraction into one level, as in multi-dimensional paging and multi-level device assignment, to other problems that involve nesting. The paper also won the best paper award at the conference, further reflecting its quality.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;br /&gt;
&lt;br /&gt;
[2] Popek &amp;amp; Goldberg (1974).  [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCkQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.141.4815%26rep%3Drep1%26type%3Dpdf&amp;amp;ei=uxD4TL_OOYeSswbbydzZCA&amp;amp;usg=AFQjCNEavbxNIe4sUwidBvE_3S8MXY3fHg&amp;amp;sig2=BS1tG9eadLRrKVItvb6gBg &#039;&#039;Formal requirements for virtualizable 3rd Generation architecture, section 1: Virtual machine concepts&#039;&#039; ]&lt;br /&gt;
&lt;br /&gt;
[3] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, page 574-576.&lt;br /&gt;
&lt;br /&gt;
[4] Goldberg, R. P. [http://portal.acm.org/citation.cfm?id=800122.803950 Architecture of Virtual Machines]. In &#039;&#039;Proceedings of the Workshop on Virtual Computer Systems&#039;&#039;, ACM pp. 74-112&lt;br /&gt;
&lt;br /&gt;
[5] Berghmans, O. Nesting Virtual Machines in Virtualization Test Frameworks. Master&#039;s Thesis, University of Antwerp, 2010.&lt;br /&gt;
&lt;br /&gt;
[6] Presentation by Joanna Rutkowska, Black Hat Briefings 2006.&lt;br /&gt;
&lt;br /&gt;
[7] Buytaert, Dittner &amp;amp; Rule. &#039;&#039;The best damn server virtualization book period.&#039;&#039; pages 16-18&lt;br /&gt;
&lt;br /&gt;
[8] Clark, Fraser, Hand, Hansen, Jul, Limpach, Pratt &amp;amp; Warfield. &#039;&#039;Live migration of virtual machines&#039;&#039;. page 273-286&lt;br /&gt;
&lt;br /&gt;
[9] Gillam, Lee (2010). &#039;&#039;Cloud Computing: Principles, Systems and Applications&#039;&#039;. page 26-27&lt;br /&gt;
&lt;br /&gt;
[10] Sugerman, Venkitachalm &amp;amp; Lim. (2001). [http://portal.acm.org/citation.cfm?id=1618525.1618534 &#039;&#039;Virtualizing I/O devices on VMware workstation’s hosted virtual machine monitor.&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[11] Russell, Rusty (2008). [http://portal.acm.org/citation.cfm?id=1400097.1400108&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;idx=J597&amp;amp;part=newsletter&amp;amp;WantType=Newsletters&amp;amp;title=ACM%20SIGOPS%20Operating%20Systems%20Review &#039;&#039;virtio: towards a de-facto standard for virtual I/O devices&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[12] Barham, Dragovic, Fraser, Hand, Harris, Ho, Neugebauer, Pratt &amp;amp; Warfield (2003). [http://portal.acm.org/citation.cfm?doid=945445.945462 &#039;&#039;Xen and the art of virtualization&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[13] Levasseur, Uhlig, Stoess &amp;amp; Gotz (2004). [http://portal.acm.org/citation.cfm?id=1251256 &#039;&#039;Unmodified device driver reuse and improved system dependability via virtual machines&#039;&#039;.]&lt;br /&gt;
&lt;br /&gt;
[14] Yassour, Ben-Yehuda &amp;amp; Wasserman (2008). [http://www.mulix.org/misc/hv.pdf &#039;&#039;Direct device assignment for untrusted fully-virtualized virtual machines&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6856</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6856"/>
		<updated>2010-12-03T06:19:57Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Contribution */  Grammar&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +        &lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it&#039;s essential that we provide some insight and background to the concepts &lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation consists of a guest hypervisor and a virtualized environment, giving the guest virtual machine the illusion that it&#039;s running on the bare hardware. Realistically, however, the host operating system treats the virtual machine as an application.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used: data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on full-virtualization of hardware within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guests among one another, and with the host hardware and operating system. The hypervisor also has control of host resources without the host truly knowing which resources the VMM controls. [2]&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside another virtual machine. For instance, the main operating system hypervisor (L0) can run the virtual machines L1, L2 and L3. In turn, each of those virtual machines is able to run its own virtual machines, and so on (Figure 1). &lt;br /&gt;
[[File:virtualization2.png|thumb|right|Figure 1: Nested virtualization. The guest hypervisor denotes the creation of a virtual machine.|left|400px]]&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, that range from 0 to 3.&lt;br /&gt;
Ring 0 (root mode) is the most privileged level, allowing access to the bare hardware components. The operating system kernel must &lt;br /&gt;
execute in Ring 0 in order to access the hardware and maintain control. User programs execute in Ring 3 (guest mode), while Ring 1 and Ring 2 are dedicated to device drivers and other operations. [7]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
Para-virtualization is a virtualization model that requires the guest OS kernel to be modified in order to allow it some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of this article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest kernel that allows privileged hardware access via special instructions called hypercalls. The advantage is that there are fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization. [3]&lt;br /&gt;
&lt;br /&gt;
===x86 models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
The trap-and-emulate model is based on the idea that when a guest virtual machine attempts to execute a privileged instruction, it triggers a trap or fault that goes down to level L0, where the host hypervisor resides. Since the host hypervisor is the only one capable of executing privileged instructions at Ring 0, it handles the trap caused by the guest and provides an emulation of the desired instruction. In this way, the guest gains Ring 0 privilege through the help of the hypervisor. It is important to note that the guest is unaware of this emulation and operates as if it were running on the bare hardware.&lt;br /&gt;
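The trap-and-emulate flow can be sketched in a few lines of Python. This is a toy simulation of our own, not code from the paper; the opcode names and the shadow register value are invented for illustration:&lt;br /&gt;

```python
# Toy trap-and-emulate simulation. Opcode names and the shadow register
# value are invented; this is not the paper's code.

PRIVILEGED = {"read_cr3"}  # pretend this opcode requires Ring 0

class Hypervisor:
    def __init__(self):
        self.traps_handled = 0
        self.shadow_cr3 = 0x1000  # emulated (virtualized) state kept for the guest

    def handle_trap(self, op):
        # Emulate the privileged instruction on the guest's behalf.
        self.traps_handled += 1
        if op == "read_cr3":
            return self.shadow_cr3  # guest sees the virtualized value

def run_guest(instructions, hv):
    results = []
    for op in instructions:
        if op in PRIVILEGED:
            # The CPU traps to the hypervisor in Ring 0 ...
            results.append(hv.handle_trap(op))
        else:
            # ... everything else executes directly.
            results.append("ran " + op)
    return results

hv = Hypervisor()
out = run_guest(["add", "read_cr3", "mul"], hv)
```

From the guest&#039;s perspective the privileged instruction simply returned a value; it cannot tell that the hypervisor intervened.&lt;br /&gt;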
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
x86-based systems provide single-level architectural support for virtualization. In this hardware model, the host hypervisor L0 (running in Ring 0) handles all traps caused by any guest hypervisor running at any level of the virtualization stack. Assume that the host hypervisor L0 runs L1. When L1 attempts to run its own virtual machine L2, this causes a trap that goes down to the host hypervisor at level L0; L0 then handles the trap and performs the emulation required for L1 to create L2. More generally, every trap occurring at level Ln causes a drop to L0, where the host hypervisor resides. The host hypervisor then forwards the trap to the parent of Ln, namely Ln-1, which in turn causes further traps down to L0, and so on. These trap-handling switches keep occurring until the desired emulated result reaches Ln, giving it the privilege it needs to execute (Figure 2).&lt;br /&gt;
&lt;br /&gt;
[[File:single-levelV.png|thumb|right|Figure 2: The single-level architecture support for virtualization that relies on the host hypervisor (L0) to handle every trap caused by a guest.|left|400px]]&lt;br /&gt;
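To see why this forwarding is costly, consider a toy model (our own simplification, not numbers from the paper): every privileged instruction a guest hypervisor executes while handling an exit is itself an exit that must be handled one level down.&lt;br /&gt;

```python
# Toy model of exit multiplication in the single-level architecture.
# ops_per_exit is an invented constant: the number of privileged
# instructions a hypervisor executes while handling one exit.

def exits_to_handle(level, ops_per_exit=3):
    """Hardware exits consumed when the hypervisor at `level` handles one exit.

    L0 runs on bare metal, so its privileged instructions do not trap.
    """
    if level == 0:
        return 1  # just the original exit
    # The exit itself, plus each privileged instruction in the handler
    # traps again and must be handled by the level below.
    return 1 + ops_per_exit * exits_to_handle(level - 1, ops_per_exit)

single = exits_to_handle(1)  # L1 on bare-metal L0 handling an L2 exit
nested = exits_to_handle(2)  # one level deeper
```

The count grows geometrically with depth, which is exactly the multiplication of traps that the paper sets out to contain.&lt;br /&gt;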
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run a particular application or OS that is not compatible with the existing or running OS as a virtual machine. Operating systems could also provide the user with a compatibility mode of other operating systems or applications. An example of this is the Windows XP mode that is available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its own system on the host hardware without worrying about compatibility issues. [9]&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IAAS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites to host their API and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is a hollow program or network that appears to be functioning to outside users, but in reality, it is only there as a security tool to watch or trap hacker attacks. By using nested virtualization, we can create a honeypot of our system as virtual machines and see how our virtual system is being attacked or what kind of features are being exploited. We can take advantage of the fact that those virtual honeypots can easily be controlled, manipulated, destroyed or even restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for upgrade. Instead of moving each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents. [8]&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged, it can easily be removed, recreated, or restored from a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid-1970s [4]. Early research in the area assumed hardware support for nested virtualization. Actual implementations, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. Software-based solutions have recently become available [5]; however, they suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to having nested virtualization without architectural support is that, as you increase the levels of virtualization, the number of control switches between the different levels of hypervisors increases. A trap in a highly nested virtual machine first goes to the bottom level hypervisor, which can send it up to the second level hypervisor, which can in turn send it up (or back down), until it arrives at potentially the worst case possible, where it reaches the hypervisor that is one level below the virtual machine itself. The trap instruction can be bounced between different levels of hypervisor, which results in one trap instruction multiplying to many trap instructions. &lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful because this support does not always exist, such as on x86 processors. Solutions that do not require this suffer from significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique to reconcile the lack of hardware support on available hardware with efficiency. It is, for the most part, able to contain the problem of a single nested trap expanding into many more trap instructions, at least for the nesting depth the authors considered, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer among multiple guest operating systems. Nested virtualization must share these resources among multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory, and I/O devices as the three key resources that need to be shared. Putting this together, the paper presents a solution to the problem of how to multiplex the CPU, memory, and I/O efficiently between multiple virtual operating systems and hypervisors on a system that has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
The non-stop evolution of computers invites intricate designs that are virtualized and harmonious with cloud computing. The paper contributes to this trend by allowing consumers and users to inject machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for both security and compatibility. The sophisticated abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, enable programmers to build further developments and ideas on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs around a particular VM state, which could certainly be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
The fundamental idea of the Turtles Project relies on multiplexing the hardware among the involved guest virtual machines. When a virtual machine like L1 attempts to run L2, this triggers a trap that gets handled by L0, just as we illustrated earlier in the single-level architecture model. This trap includes the environment specification needed to run L2 on the bare hardware. L0 converts L2&#039;s virtual memory to L1&#039;s virtual memory so that they run on the same level. Thus, the approach ends up flattening the virtualization levels and running them all as L0-level virtual machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the following question should be asked: if the host hypervisor ends up running and multiplexing the hardware among all the guest virtual machines, then how can we keep track of the virtualization levels? The answer lies within the host hypervisor. By using special control structures like the VMCS and the VMCB, the hypervisor has the ability to differentiate between the different levels and keep track of each parent and guest at each level.&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine, and which is passed to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine: only L0 occupies the architectural hypervisor mode. So, to multiplex the hardware, L2 must ultimately run as a virtual machine of L0. L0 therefore merges the VMCS&#039;s: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2, which can in turn cause traps. L0 handles each trap itself or forwards it to L1, depending on whether it is the L1 hypervisor&#039;s responsibility to handle. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This normally wouldn&#039;t be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 exit (or L3 exit) causes many exits, and more exits mean less performance. This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
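The VMCS merge can be sketched as follows (a sketch of our own: the field names and values are invented, and real VMCS&#039;s contain many more fields). VMCS0-&amp;gt;2 takes its guest state from VMCS1-&amp;gt;2 and its host state from VMCS0-&amp;gt;1, so L2 runs with the state L1 specified but every exit lands in L0:&lt;br /&gt;

```python
# Sketch of merging VMCS0->1 and VMCS1->2 into VMCS0->2.
# Field names and addresses are invented for illustration.

def merge_vmcs(vmcs01, vmcs12):
    merged = {}
    # Guest-state fields come from VMCS1->2: they describe L2 as L1 set it up.
    for field, value in vmcs12.items():
        if field.startswith("guest_"):
            merged[field] = value
    # Host-state fields come from VMCS0->1: exits must return control to L0.
    for field, value in vmcs01.items():
        if field.startswith("host_"):
            merged[field] = value
    return merged

vmcs01 = {"guest_rip": 0xA000, "host_rip": 0x1000}  # L0 running L1
vmcs12 = {"guest_rip": 0xB000, "host_rip": 0xA500}  # L1 trying to run L2
vmcs02 = merge_vmcs(vmcs01, vmcs12)
```

Note how L1&#039;s host state (0xA500) is discarded in the merged structure: the hardware will always return to L0, which then decides whether to forward the exit to L1.&lt;br /&gt;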
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
The main idea with n = 2 nested virtualization is that there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. So there are three levels of translation, but the hardware MMU supports only two page tables: the regular page table, which takes virtual to guest-physical addresses, and the EPT, which takes guest-physical to host-physical addresses. The authors compress the three translations onto the two tables, going from beginning to end in two steps instead of three. One way to do this is with a shadow page table for the virtual machine (shadow-on-EPT), which compresses the three logical translations into two tables. However, since EPT tables rarely change while guest page tables change frequently, a better option is for L0 to emulate EPT for L1 and use EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
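The compression of EPT0-&amp;gt;1 and EPT1-&amp;gt;2 into EPT0-&amp;gt;2 amounts to composing two address maps. A minimal sketch, with invented page numbers:&lt;br /&gt;

```python
# Composing two translation tables into one, as L0 does to build EPT0->2.
# Page numbers are invented for illustration.

ept01 = {0x10: 0x80, 0x11: 0x81}  # L1-physical -> L0-physical (kept by L0)
ept12 = {0x20: 0x10, 0x21: 0x11}  # L2-physical -> L1-physical (set by L1)

def compress(ept01, ept12):
    """EPT0->2[p] = EPT0->1[EPT1->2[p]] for each L2-physical page p."""
    return {l2pa: ept01[l1pa] for l2pa, l1pa in ept12.items()}

ept02 = compress(ept01, ept12)

# The hardware then walks only two tables: L2's own page table
# (virtual -> L2-physical) and the compressed EPT0->2.
l2_page_table = {0x1000: 0x20}

def hardware_walk(vaddr):
    return ept02[l2_page_table[vaddr]]
```

Because EPT0-&amp;gt;2 is precomputed, an L2 memory access needs no exit at all; L0 only intervenes when L1 modifies its emulated EPT.&lt;br /&gt;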
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [10], para-virtualized drivers, which require the guest to know it is running on a hypervisor [11][12], and direct device assignment [13][14], which gives the best performance. For direct assignment, the authors used an IOMMU so that device DMA can safely bypass the hypervisor. Nesting gives 3x3 options for I/O virtualization (any of the three methods between L0 and L1, combined with any of the three between L1 and L2); of these, the authors chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this, they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs an IOMMU to let its virtual machines access the device safely. Since the hardware provides only a single level of IOMMU, L0 emulates an IOMMU for L1, then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 can program the device directly and the device can DMA into L2&#039;s memory space directly.&lt;br /&gt;
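The IOMMU compression is the same map composition applied to DMA addresses. A sketch with invented mappings: L0 folds the IOMMU table L1 programs for L2 together with its own table into the single hardware IOMMU table, so the device can DMA with L2 addresses directly:&lt;br /&gt;

```python
# Folding two IOMMU mapping levels into the one hardware IOMMU table.
# All addresses are invented for illustration.

iommu_l0 = {0x500: 0x900}  # L1-physical -> machine, programmed by L0
iommu_l1 = {0x300: 0x500}  # L2-physical -> L1-physical, programmed by L1
                           # on the IOMMU that L0 emulates for it

# L0 compresses both levels into what it loads into the hardware IOMMU:
hw_iommu = {l2pa: iommu_l0[l1pa] for l2pa, l1pa in iommu_l1.items()}

def device_dma(bus_addr):
    """The device issues DMA with an L2 address; the IOMMU resolves it once."""
    return hw_iommu[bus_addr]
```

With this single compressed table in place, neither L0 nor L1 sits on the data path: the device&#039;s DMA lands in L2&#039;s memory directly.&lt;br /&gt;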
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCS&#039;s, so they optimize this by copying data between VMCS&#039;s only if it has been modified, carefully balancing full copying against partial copying and tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time. On the processors the authors tested, VMCS data can be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies, though this may not work on other processors. The main cause of slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to read and change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The pros ===&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers, and should not have too large an effect on an end user running an application in a nested virtual machine, especially if the user is using the system at a low nesting depth. One can further argue that the most common use cases for nested virtualization that the authors mention in section 1, such as virtualizing OSs that are already hypervisors (like Windows 7) and hypervisors in the cloud, will be at a shallow depth. It then follows that the testing the authors do in section 4 covers the most common use cases, so users can expect similarly impressive performance. The contribution is also visible with respect to security and compatibility. On the security side, this nested virtualization technique can be used to study hypervisor-level rootkits, such as Blue Pill [6], by hosting an infected hypervisor as a guest on top of another hypervisor. Since this is the first successful implementation of this type that does not require modified hardware (earlier attempts were research designs of mixed quality), we expect to see increased interest in the nested integration model described above. The framework also makes for convenient testing and debugging, since hypervisors can run under other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and multi-dimensional paging, which is very appealing.&lt;br /&gt;
&lt;br /&gt;
=== The cons ===&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, since the authors introduce an additional level of abstraction, continuing the long-standing trade-off between flexibility and performance. The performance hit is mainly imposed by the multiplication of exits that occurs when a nested guest traps: control passes to the lowest-level hypervisor, which may hand the trap off to the hypervisors above it before finally returning to the guest. Time complexity can therefore become an issue when multiple levels of virtualization are involved.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we observed that the paper only performs tests at the L2 level, a guest with two hypervisors below it. It might have been useful for understanding the limits of nesting if the authors had investigated higher levels of nesting such as L4 or L5, because it is difficult to predict how the system will react as nesting gets deeper: the increase in the number of traps and other performance-killing problems can potentially be exponential. Another significant detriment is that the paper relies on optimizations, such as avoiding vmread/vmwrite operations, that are aimed at specific CPUs, as stated on page 7, section 3.5: &amp;quot;(...) this optimization does not strictly adhere to the VMX specifications, and thus might not work on processors other than the ones we have tested&amp;quot;. This means that some of the techniques the authors use to increase performance are not reproducible on other systems, so the generality of parts of their solution may be limited.&lt;br /&gt;
&lt;br /&gt;
=== The style and presentation ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of nested virtualization in a very specific manner, and it does a good job of conveying the technical details. It does, however, assume a high level of background knowledge and familiarity with the subject, especially with the more technical points of the hardware architecture used to implement virtualization. For example, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, illustrates the technique with an example that uses terms such as EPT and L1, which may be unfamiliar to readers not used to the technical language. The paper does touch on a wide range of topics in the field of virtualization, including CPU, memory and I/O device virtualization. This wide scope means that many of the major components of virtualization are discussed, so in the process of understanding the paper one learns a lot about many different parts of the field.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The research presented in the paper is the first to achieve efficient nested x86 virtualization without altering the hardware, relying on software-only techniques and mechanisms. This is a major improvement over the currently available solutions, and the techniques used to achieve nested virtualization are comprehensive and interesting. The work also has good potential as a basis for future research. The authors refer to security and clouds as two potential areas for future research; another interesting direction could be applying the way they compress multiple levels of abstraction into one level, with multi-dimensional paging and device assignment, to other problems that involve nesting. The paper also won the best paper award at the conference, further reflecting its quality.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;br /&gt;
&lt;br /&gt;
[2] Popek &amp;amp; Goldberg (1974).  [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCkQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.141.4815%26rep%3Drep1%26type%3Dpdf&amp;amp;ei=uxD4TL_OOYeSswbbydzZCA&amp;amp;usg=AFQjCNEavbxNIe4sUwidBvE_3S8MXY3fHg&amp;amp;sig2=BS1tG9eadLRrKVItvb6gBg &#039;&#039;Formal requirements for virtualizable 3rd Generation architecture, section 1: Virtual machine concepts&#039;&#039; ]&lt;br /&gt;
&lt;br /&gt;
[3] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, pages 574-576.&lt;br /&gt;
&lt;br /&gt;
[4] Goldberg, R. P. [http://portal.acm.org/citation.cfm?id=800122.803950 Architecture of Virtual Machines]. In &#039;&#039;Proceedings of the Workshop on Virtual Computer Systems&#039;&#039;, ACM, pp. 74-112.&lt;br /&gt;
&lt;br /&gt;
[5] Berghmans, O. (2010). Nesting Virtual Machines in Virtualization Test Frameworks. Master&#039;s Thesis, University of Antwerp.&lt;br /&gt;
&lt;br /&gt;
[6] Presentation by Joanna Rutkowska, Black Hat Briefings 2006.&lt;br /&gt;
&lt;br /&gt;
[7] Buytaert, Dittner &amp;amp; Rule. &#039;&#039;The best damn server virtualization book period.&#039;&#039; pages 16-18&lt;br /&gt;
&lt;br /&gt;
[8] Clark, Fraser, Hand, Hansen, Jul, Limpach, Pratt &amp;amp; Warfield. &#039;&#039;Live migration of virtual machines&#039;&#039;, pages 273-286.&lt;br /&gt;
&lt;br /&gt;
[9] Gillam, Lee (2010). &#039;&#039;Cloud Computing: Principles, Systems and Applications&#039;&#039;, pages 26-27.&lt;br /&gt;
&lt;br /&gt;
[10] Sugerman, Venkitachalam &amp;amp; Lim (2001). [http://portal.acm.org/citation.cfm?id=1618525.1618534 &#039;&#039;Virtualizing I/O devices on VMware workstation’s hosted virtual machine monitor&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
[11] Russell, Rusty (2008). [http://portal.acm.org/citation.cfm?id=1400097.1400108&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;idx=J597&amp;amp;part=newsletter&amp;amp;WantType=Newsletters&amp;amp;title=ACM%20SIGOPS%20Operating%20Systems%20Review &#039;&#039;virtio: towards a de-facto standard for virtual I/O devices&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[12] Barham, Dragovic, Fraser, Hand, Harris, Ho, Neugebauer, Pratt &amp;amp; Warfield (2003). [http://portal.acm.org/citation.cfm?doid=945445.945462 &#039;&#039;Xen and the art of virtualization&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[13] Levasseur, Uhlig, Stoess &amp;amp; Gotz (2004). [http://portal.acm.org/citation.cfm?id=1251256 &#039;&#039;Unmodified device driver reuse and improved system dependability via virtual machines&#039;&#039;.]&lt;br /&gt;
&lt;br /&gt;
[14] Yassour, Ben-Yehuda &amp;amp; Wasserman (2008). [http://www.mulix.org/misc/hv.pdf &#039;&#039;Direct device assignment for untrusted fully-virtualized virtual machines&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6849</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6849"/>
		<updated>2010-12-03T06:01:53Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Research problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to the discussion page for group member confirmation, general talk and paper discussion.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it&#039;s essential that we provide some background on the concepts&lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] This emulation, usually referred to as a virtual machine, consists of a guest hypervisor and a virtualized environment, giving the guest virtual machine the illusion that it&#039;s running on the bare hardware. Realistically, however, the host operating system treats the virtual machine as an application.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used: data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on full-virtualization of hardware within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (Virtual Machine Monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the issues that may arise from the interaction of those guests with one another and with the host hardware and operating system. The hypervisor also controls host resources without the host truly knowing which resources the VMM controls. [2]&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside another virtual machine. For instance, the main operating system hypervisor (L0) can run a guest virtual machine L1. In turn, L1 is able to run its own virtual machine L2, which can run L3, and so on (Figure 1). &lt;br /&gt;
[[File:virtualization2.png|thumb|right|Figure 1: Nested virtualization. The guest hypervisor denotes the creation of a virtual machine.|left|400px]]&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, ranging from 0 to 3.&lt;br /&gt;
Ring 0 (root mode) is the most privileged level, allowing access to the bare hardware components. The operating system kernel must&lt;br /&gt;
execute in Ring 0 in order to access the hardware and secure control. User programs execute in Ring 3 (guest mode). Rings 1 and 2 are dedicated to device drivers and other operations. [7]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
Para-virtualization is a virtualization model that requires the guest OS kernel to be modified in order to allow the guest some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of this article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest kernel that allows privileged hardware access via special instructions called hypercalls. The advantage is that there are fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, don&#039;t support para-virtualization. [3]&lt;br /&gt;
&lt;br /&gt;
===x86 models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
The trap and emulate model is based on the idea that when a guest virtual machine attempts to execute a privileged instruction, it triggers a trap or fault that goes down to level L0, where the host hypervisor resides. Since the host hypervisor is the only one capable of executing privileged instructions at Ring 0, it handles the trap caused by the guest and provides an emulation of the desired instruction. This way, the guest gains Ring 0 privileges through the help of the hypervisor. It is important to note that the guest is unaware of this emulation and operates as if it were running on the bare hardware.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
x86-based systems provide a single level of architectural support for virtualization. In this hardware model, the host hypervisor L0 (running in Ring 0) handles all traps caused by any guest hypervisor running at any level of the virtualization stack. Assume that the host hypervisor (L0) runs L1. When L1 attempts to run its own virtual machine, L2, this causes a trap that goes down to the host hypervisor at level L0; L0 then handles the trap and performs the emulation required for L1 to create L2. More generally, every trap occurring at level Ln causes a drop to L0, where the host hypervisor resides. The host hypervisor then forwards the trap to the parent of Ln, namely Ln-1, which in turn causes another trap down to L0, and so on. This trap-handling switching keeps occurring until the desired emulated result reaches Ln, allowing it the privilege to execute (Figure 2).&lt;br /&gt;
&lt;br /&gt;
[[File:single-levelV.png|thumb|right|Figure 2: The single-level architecture support for virtualization that relies on the host hypervisor (L0) to handle every trap caused by a guest.|left|400px]]&lt;br /&gt;
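The trap-forwarding path described above can be sketched with a small Python model; the function and its world-switch accounting are purely illustrative, not taken from the paper:&lt;br /&gt;

```python
# Toy model of the single-level x86 architecture described above:
# every trap, at any nesting level, first lands in the L0 hypervisor,
# and each forwarding step to a parent hypervisor is itself a world
# switch through L0.

def switches_for_trap(n):
    """Count control transfers needed to handle one trap raised by Ln.

    The trap drops to L0, L0 forwards it to the parent Ln-1, whose own
    handling again drops to L0, and so on, until L0 emulates the
    instruction for L1 and the result is propagated back to Ln.
    """
    transfers = []
    level = n
    while level != 1:
        transfers.append((level, 0))      # trap: Ln drops to L0
        transfers.append((0, level - 1))  # L0 forwards to the parent
        level -= 1
    transfers.append((level, 0))          # L1 traps to L0, which emulates
    transfers.append((0, n))              # result resumes the original guest
    return transfers

for depth in (1, 2, 3):
    path = switches_for_trap(depth)
    print(f"L{depth} trap: {len(path)} world switches")
```

Even in this simplified model every transfer passes through L0, which is why deeper nesting multiplies the cost of a single trap.&lt;br /&gt;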
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run, as a virtual machine, a particular application or OS that is not compatible with the existing or running OS. Operating systems can also provide the user with a compatibility mode for other operating systems or applications. An example of this is the Windows XP mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom of implementing its own system on the host hardware without worrying about compatibility issues. [9]&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IAAS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites to host their API and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is a hollow program or network that appears to outside users to be functioning normally, but in reality is only there as a security tool to watch or trap attackers. Using nested virtualization, we can create a honeypot of our system as virtual machines and see how our virtual system is attacked or what kinds of features are exploited. We can take advantage of the fact that these virtual honeypots can easily be controlled, manipulated, destroyed or even restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines, for upgrades or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade. Instead of moving each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents. [8]&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed or recreated, and because a snapshot of the running virtual machine can be taken, it can even be restored.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid-1970s [4]. Early research in the area assumes that there is hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. Software-based solutions have recently become available [5]; however, these solutions suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the levels of virtualization increase, so does the number of control switches between the different levels of hypervisors. A trap in a deeply nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until, in the worst case, it reaches the hypervisor one level below the virtual machine itself. A trap instruction can thus be bounced between different levels of hypervisor, so a single trap instruction multiplies into many trap instructions. &lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful because this support does not always exist, such as on x86 processors. Solutions that do not require this suffer from significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique to reconcile the lack of hardware support on available hardware with efficiency. It is, for the most part, able to contain the problem of a single nested trap expanding into many more trap instructions, at least for the nesting depth the authors considered, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors acknowledge the CPU, memory, and IO devices as the three key resources that they need to share. Combining this, the paper presents a solution to the problem of how to multiplex the CPU, memory, and IO efficiently between multiple virtual operating systems and hypervisors on a system which has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
The non-stop evolution of computers encourages intricate designs that are virtualized and work in harmony with cloud computing. The paper contributes to this trend by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for security and compatibility. The abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, let programmers build further developments and ideas on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs in a VM with a recorded state, which could certainly be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
The fundamental idea of the Turtles Project relies on multiplexing the hardware among the involved guest virtual machines. When a virtual machine like L1 attempts to run L2, this triggers a trap that gets handled by L0, just as we illustrated earlier in the single-level architecture model. The trap includes the environment specifications needed to run L2 on the bare hardware. L0 converts L2&#039;s virtual memory to L1&#039;s virtual memory so that the two run at the same level. Thus, the approach ends up flattening the virtualization levels and running them all as L0-level virtual machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the following question should be asked: if the host hypervisor ends up running and multiplexing the hardware among the guest virtual machines, how can we keep track of the virtualization levels? The answer lies within the host hypervisor. By using special control structures like the VMCS (Intel) and the VMCB (AMD), the hypervisor has the ability to differentiate between the different levels and keep track of each parent and guest at each level.&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and the architecture supports only a single level of hypervisor. In order to multiplex the hardware, L0 makes L2 run as if it were a virtual machine of its own: L0 merges the VMCSs, combining VMCS0-&amp;gt;1 with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, which enables L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 exit (or L3 exit) causes many exits, and more exits means less performance. This problem was corrected by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end L1 or L0, depending on the trap, finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
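The VMCS merge step described above can be sketched in Python; this is a hypothetical illustration with made-up field names, not the real VMX encoding and not code from the paper:&lt;br /&gt;

```python
# Hypothetical sketch of the merge described above: L0 holds
# vmcs_0_1 (how L0 runs L1) and reads the vmcs_1_2 that L1 prepared;
# it combines them into vmcs_0_2 so the hardware can run L2 directly.
# Field names are illustrative only.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Build VMCS 0-2: guest state comes from the view L1 has of L2,
    while host (exit) state must return control to L0, not to L1."""
    merged = {}
    merged.update({k: v for k, v in vmcs_1_2.items() if k.startswith("guest_")})
    merged.update({k: v for k, v in vmcs_0_1.items() if k.startswith("host_")})
    return merged

vmcs_0_1 = {"guest_rip": "L1_entry", "host_rip": "L0_exit_handler"}
vmcs_1_2 = {"guest_rip": "L2_entry", "host_rip": "L1_exit_handler"}

vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(vmcs_0_2)  # guest state of L2, but exits land in L0
```

The key point of the sketch is that every exit in the merged structure targets L0, matching the single-level hardware model.&lt;br /&gt;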
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, yet the hardware MMU exposes only two page tables: the regular page table (virtual to physical) and the EPT (guest physical to host physical). The three translations are compressed onto the two tables, going from beginning to end in two hops instead of three. This is done with a shadow page table for the virtual machine and shadow-on-EPT; shadow-on-EPT compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
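The table compression described above can be illustrated with a toy Python sketch; the page numbers and table contents are invented for illustration:&lt;br /&gt;

```python
# Toy sketch of the compression described above: compose the
# L2-physical-to-L1-physical table (maintained by L1) with the
# L1-physical-to-L0-physical EPT (maintained by L0) into a single
# table, so the hardware resolves an L2 address in one hop.

def compress_ept(ept_1_2, ept_0_1):
    """For every L2 page mapped by L1, look up where L0 placed the
    backing L1 page, yielding a direct L2-to-machine mapping."""
    return {
        l2_page: ept_0_1[l1_page]
        for l2_page, l1_page in ept_1_2.items()
        if l1_page in ept_0_1
    }

ept_1_2 = {0: 10, 1: 11, 2: 12}   # L2 physical to L1 physical
ept_0_1 = {10: 7, 11: 3, 12: 9}   # L1 physical to L0 machine frames

ept_0_2 = compress_ept(ept_1_2, ept_0_1)
print(ept_0_2)  # {0: 7, 1: 3, 2: 9}
```

Since the EPT tables rarely change, a compressed table like this seldom needs rebuilding, which is what keeps the exit count down.&lt;br /&gt;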
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [10], para-virtualized drivers [11][12], and direct device assignment [13][14], which yields the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each of the hypervisors L0 and L1 needs to use an IOMMU to allow its virtual machines to access the device safely. There is only one physical IOMMU, so L0 needs to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU translations into a single hardware IOMMU page table so that L2 programs the device directly, and the device&#039;s DMA goes into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were confined to L0. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying with tracking. The VMCSs are optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field; however, VMCS data can be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
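The effect of the vmread/vmwrite optimization can be sketched with a toy exit count; all the numbers here are invented for illustration and are not measurements from the paper:&lt;br /&gt;

```python
# Toy accounting of L0 exits taken while L1 handles one L2 exit.
# Without the optimization, every vmread/vmwrite issued by L1 traps
# to L0; with the bulk-copy optimization, the VMCS block is copied
# with ordinary memory accesses for an assumed small fixed cost.

def l0_exits_for_one_l2_exit(vmcs_accesses, privileged_ops, bulk_copy=False):
    initial_exit = 1  # the L2 exit itself lands in L0 first
    if bulk_copy:
        vmcs_traps = 2  # assumed fixed cost to sync the VMCS block
    else:
        vmcs_traps = vmcs_accesses  # one trap per vmread/vmwrite by L1
    return initial_exit + vmcs_traps + privileged_ops

print(l0_exits_for_one_l2_exit(30, 5))                  # 36 exits
print(l0_exits_for_one_l2_exit(30, 5, bulk_copy=True))  # 8 exits
```

This also mirrors why AMD SVM, where the specifications are plain memory, largely avoids the problem.&lt;br /&gt;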
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The pros ===&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution to virtualization and data sharing within a single machine. It is aimed at programmers, and should not have much of an effect on an end user running an application in a nested virtual machine, especially if the user is using the system at a low nesting depth. One can further argue that the most common use cases for nested virtualization that the authors mention in section 1, such as virtualizing OSs that are already hypervisors (like Windows 7) and hypervisors in the cloud, will be at a shallow depth. It then follows that the testing the authors do in section 4 covers the most common use cases, so users can expect similarly impressive performance. The contribution is also visible with respect to security and compatibility. On the security side, this nested virtualization technique can be used to study hypervisor-level rootkits, such as Blue Pill [6], by hosting an infected hypervisor as a guest on top of another hypervisor. Since this is the first successful implementation of this type that does not modify the hardware (earlier attempts were research designs of limited practicality), we expect to see increased interest in the nested integration model described above. The framework makes for convenient testing and debugging, because hypervisors can run inconspicuously on top of other nested hypervisors and VMs without being detected. Moreover, the overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmread/vmwrite traps and multi-dimensional paging, which is very appealing.&lt;br /&gt;
&lt;br /&gt;
=== The cons ===&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The performance hit is mainly imposed by the multiplication of exits that occurs when a nested guest traps: control passes to the lowest-level hypervisor, which may hand the trap off to the hypervisors above it before finally returning to the guest. Running time can therefore become an issue when multiple levels of virtualization are involved.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we observed that the paper performs its tests at the L2 level, a guest with two hypervisors below it. It might have been useful, to understand the limits of nesting, to investigate higher levels of nesting such as L4 or L5: it is difficult to predict how the system will react to deep nesting, because the increase in the number of traps and other performance-killing problems can potentially be exponential as the nesting gets deeper. Another significant limitation is that the paper relies on optimizations, such as the avoidance of vmread/vmwrite operations, that are aimed at specific CPUs, as stated on page 7, section 3.5: &amp;quot;(...) this optimization does not strictly adhere to the VMX specifications, and thus might not work on processors other than the ones we have tested&amp;quot;. This means that some of the techniques the authors use to increase performance are not reproducible on other systems, so the generality of parts of their solution may be limited.&lt;br /&gt;
&lt;br /&gt;
=== The style and presentation ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of nested virtualization in a very specific manner, and it does a good job of conveying the technical details. It does, however, assume a high level of background knowledge and familiarity with the subject, especially with the more technical points of the hardware architecture used to implement virtualization. For example, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, illustrates the technique with an example that uses terms such as EPT and L1, which may be unfamiliar to readers not used to the technical language. The paper does touch on a wide range of topics in the field of virtualization, including CPU, memory and I/O device virtualization. This wide scope means that many of the major components of virtualization are discussed, so in the process of understanding the paper one learns a lot about many different parts of the field.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The research presented in the paper is the first to achieve efficient nested x86 virtualization without altering the hardware, relying on software-only techniques and mechanisms. This is a major improvement over the currently available solutions, and the techniques used to achieve nested virtualization are comprehensive and interesting. The work also has good potential as a basis for future research. The authors refer to security and clouds as two potential areas for future research; another interesting direction could be applying the way they compress multiple levels of abstraction into one level, with multi-dimensional paging and device assignment, to other problems that involve nesting. The paper also won the best paper award at the conference, further reflecting its quality.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;br /&gt;
&lt;br /&gt;
[2] Popek &amp;amp; Goldberg (1974).  [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCkQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.141.4815%26rep%3Drep1%26type%3Dpdf&amp;amp;ei=uxD4TL_OOYeSswbbydzZCA&amp;amp;usg=AFQjCNEavbxNIe4sUwidBvE_3S8MXY3fHg&amp;amp;sig2=BS1tG9eadLRrKVItvb6gBg &#039;&#039;Formal requirements for virtualizable 3rd Generation architecture, section 1: Virtual machine concepts&#039;&#039; ]&lt;br /&gt;
&lt;br /&gt;
[3] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, pages 574-576.&lt;br /&gt;
&lt;br /&gt;
[4] Goldberg, R. P. [http://portal.acm.org/citation.cfm?id=800122.803950 Architecture of Virtual Machines]. In &#039;&#039;Proceedings of the Workshop on Virtual Computer Systems&#039;&#039;, ACM, pp. 74-112.&lt;br /&gt;
&lt;br /&gt;
[5] Berghmans, O. (2010). Nesting Virtual Machines in Virtualization Test Frameworks. Master&#039;s Thesis, University of Antwerp.&lt;br /&gt;
&lt;br /&gt;
[6] Presentation by Joanna Rutkowska, Black Hat Briefings 2006.&lt;br /&gt;
&lt;br /&gt;
[7] Buytaert, Dittner &amp;amp; Rule. &#039;&#039;The best damn server virtualization book period.&#039;&#039; pages 16-18&lt;br /&gt;
&lt;br /&gt;
[8] Clark, Fraser, Hand, Hansen, Jul, Limpach, Pratt &amp;amp; Warfield. &#039;&#039;Live migration of virtual machines&#039;&#039;. page 273-286&lt;br /&gt;
&lt;br /&gt;
[9] Gillam, Lee (2010). &#039;&#039;Cloud Computing: Principles, Systems and Applications&#039;&#039;. page 26-27&lt;br /&gt;
&lt;br /&gt;
[10] Sugerman, Venkitachalam &amp;amp; Lim. (2001). [http://portal.acm.org/citation.cfm?id=1618525.1618534 &#039;&#039;Virtualizing I/O devices on VMware workstation’s hosted virtual machine monitor.&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[11] Russell, Rusty (2008). [http://portal.acm.org/citation.cfm?id=1400097.1400108&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;idx=J597&amp;amp;part=newsletter&amp;amp;WantType=Newsletters&amp;amp;title=ACM%20SIGOPS%20Operating%20Systems%20Review &#039;&#039;virtio: towards a de-facto standard for virtual I/O devices&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[12] Barham, Dragovic, Fraser, Hand, Harris, Ho, Neugebauer, Pratt &amp;amp; Warfield (2003). [http://portal.acm.org/citation.cfm?doid=945445.945462 &#039;&#039;Xen and the art of virtualization&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[13] Levasseur, Uhlig, Stoess &amp;amp; Gotz (2004). [http://portal.acm.org/citation.cfm?id=1251256 &#039;&#039;Unmodified device driver reuse and improved system dependability via virtual machines&#039;&#039;.]&lt;br /&gt;
&lt;br /&gt;
[14] Yassour, Ben-Yehuda &amp;amp; Wasserman (2008). [http://www.mulix.org/misc/hv.pdf &#039;&#039;Direct device assignment for untrusted fully-virtualized virtual machines&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6845</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6845"/>
		<updated>2010-12-03T05:56:32Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The uses of nested virtualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it&#039;s essential that we provide some background on the concepts and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation consists of a guest hypervisor and a virtualized environment, giving the guest virtual machine the illusion that it&#039;s running on the bare hardware. Realistically, however, the host operating system treats the virtual machine as an application.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used: data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on full-virtualization of hardware within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise from the interaction of those guests with one another, and with the host hardware and operating system. The hypervisor also has control of host resources without the host truly knowing which resources the VMM controls. [2]&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside another virtual machine. For instance, the main operating system hypervisor (L0) can run the virtual machines L1, L2 and L3. In turn, each of those virtual machines is able to run its own virtual machines, and so on (Figure 1). &lt;br /&gt;
[[File:virtualization2.png|thumb|right|Figure 1: Nested virtualization. The guest hypervisor denotes the creation of a virtual machine.|400px]]&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, ranging from 0 to 3. Ring 0 (root mode) is the most privileged level, allowing access to the bare hardware components. The operating system kernel must execute in Ring 0 in order to access the hardware and secure control. User programs execute in Ring 3 (guest mode). Ring 1 and Ring 2 are dedicated to device drivers and other operations. [7]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
Para-virtualization is a virtualization model that requires the guest OS kernel to be modified in order to allow the guest some direct access to the host hardware. In contrast to the full virtualization discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest kernel that allows privileged hardware access via special instructions called hypercalls. The advantage is that there are fewer environment switches and less interaction between the guest and host hypervisors, and thus more efficiency. However, portability is an obvious issue, since a system may be para-virtualized to be compatible with only one hypervisor. Another thing to note is that some operating systems, such as Windows, don&#039;t support para-virtualization. [3]&lt;br /&gt;
&lt;br /&gt;
===x86 models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
The trap-and-emulate model is based on the idea that when a guest virtual machine attempts to execute a privileged instruction, it triggers a trap or fault that goes down to level L0, where the host hypervisor resides. Since the host hypervisor is the only one capable of executing privileged instructions at Ring 0, it handles the trap caused by the guest and provides an emulation of the desired instruction to the guest. This way, the guest gains Ring 0 privilege through the help of the hypervisor. It is important to note that the guest is unaware of this emulation and operates as if it were running on the bare hardware.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
x86-based systems provide a single level of architectural support for virtualization. In this hardware model, the host hypervisor L0 (running in Ring 0) handles all traps caused by any guest hypervisor running at any level of the virtualization stack. Assume that the host hypervisor (L0) runs L1. When L1 attempts to run its own virtual machine, L2, this causes a trap that goes down to the host hypervisor at level L0; L0 then handles the trap and performs the emulation required for L1 to create L2. More generally, every trap occurring at level Ln causes a drop to the L0 level where the host hypervisor resides. The host hypervisor then forwards the trap to the parent of Ln, which is Ln-1, which in turn causes another trap down to L0, and so on. This trap-handling switching keeps occurring until the desired emulated result reaches Ln, granting it the privilege to execute (Figure 2).&lt;br /&gt;
&lt;br /&gt;
[[File:single-levelV.png|thumb|right|Figure 2: The single-level architecture support for virtualization that relies on the host hypervisor (L0) to handle every trap caused by a guest.|400px]]&lt;br /&gt;
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run a particular application or OS that is not compatible with the existing or running OS as a virtual machine. Operating systems could also provide the user with a compatibility mode of other operating systems or applications. An example of this is the Windows XP mode that is available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and customers have the freedom to implement their own systems on the host hardware without worrying about compatibility issues. [9]&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IAAS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites to host their API and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is a hollow program or network that appears to outside users to be functioning normally, but in reality is only there as a security tool to watch or trap attackers. Using nested virtualization, we can create a honeypot of our system as virtual machines and observe how our virtual system is attacked and which features are exploited. We can take advantage of the fact that such virtual honeypots can easily be controlled, manipulated, destroyed or restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade. Instead of moving each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents. [8]&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it is corrupted or damaged it can easily be removed or recreated, and snapshots of a running virtual machine can be taken and restored.&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid-1970s [4]. Early research in the area assumed hardware support for nested virtualization. Actual implementations, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions [5]; however, these suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the number of virtualization levels increases, the number of control switches between the different levels of hypervisors grows. A trap in a deeply nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can thus be bounced between different levels of hypervisor, so that one trap instruction multiplies into many.&lt;br /&gt;
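This multiplicative growth can be sketched with a toy cost model (purely illustrative; the recurrence and the constant are assumptions, not measurements from the paper). If handling one exit at level n makes the parent hypervisor execute a few privileged instructions, and each of those is itself an exit that must be handled one level further down, the total number of L0 exits grows geometrically with nesting depth:&lt;br /&gt;

```python
# Toy model of trap multiplication under single-level architectural
# support.  Assumption: handling one exit at level n makes the level
# n-1 hypervisor execute `priv` privileged instructions, each of which
# traps and must itself be handled one level further down.

def l0_exits(level, priv=3):
    """Total L0 exits caused by one trap in a guest at the given level."""
    if level == 1:
        return 1                      # L0 handles an L1 trap directly
    # One exit for the trap itself, plus the exits generated while the
    # parent hypervisor (itself running as a guest) handles it.
    return 1 + priv * l0_exits(level - 1, priv)

for n in range(1, 5):
    print(f"guest at L{n}: {l0_exits(n)} L0 exits")
```

With these made-up constants, a single L2 trap costs 4 L0 exits and an L4 trap already costs 40; this is the cost explosion the Turtles design works to contain.&lt;br /&gt;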
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist, as on x86 processors. Solutions that do not require it suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique that reconciles the lack of hardware support on available hardware with efficiency. It is for the most part able to contain the problem of a single nested trap expanding into many more trap instructions, at least for the nesting depths the authors considered, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory, and I/O devices as the three key resources that need to be shared. Combining this, the paper presents a solution to the problem of how to multiplex the CPU, memory, and I/O efficiently between multiple virtual operating systems and hypervisors on a system that has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
The non-stop evolution of computers invites intricate designs that are virtualized and harmonious with cloud computing. The paper contributes to this trend by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for security and compatibility. The sophisticated abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, enable programmers to build further developments and ideas on this infrastructure. For example, the Accountable Virtual Machines paper wraps programs around a particular VM state, which could certainly be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
The fundamental idea of the Turtles Project is to multiplex the hardware among the involved guest virtual machines. When a virtual machine such as L1 attempts to run L2, this triggers a trap that is handled by L0, just as illustrated earlier in the single-level architecture model. This trap includes the environment specifications needed to run L2 on the bare hardware. L0 converts L2&#039;s virtual memory to L1&#039;s virtual memory to make them run at the same level. Thus, the approach ends up flattening the virtualization levels and running them all as L0-level virtual machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the following question should be asked: if the host hypervisor ends up running and multiplexing the hardware among all the guest virtual machines, how can we keep track of the virtualization levels? The answer lies within the host hypervisor. By using special control structures such as the VMCS and the VMCB, the hypervisor is able to differentiate between the different levels and keep track of each parent and guest at each level.&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is running as a virtual machine and only L0 occupies the architectural mode for a hypervisor. Multiplexing therefore happens by having L0 run L2 on L1&#039;s behalf: L0 merges the VMCSs, combining VMCS0-&amp;gt;1 with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 causes a trap, L0 either handles it itself or forwards it to L1, depending on whether it is L1&#039;s responsibility to handle it for its virtual machine. While handling a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 or L3 exit causes many exits at L0 (and more exits mean less performance). This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
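The VMCS merge at the heart of this scheme can be sketched as follows (a minimal sketch with invented field names; the real VMCS layout and merge rules are defined by Intel&#039;s VMX specification and the paper):&lt;br /&gt;

```python
# Sketch of merging VMCS0->1 and VMCS1->2 into VMCS0->2 (field names
# are made up for illustration).  VMCS1->2 describes L2 as L1 sees it;
# its "host" state is L1's state, which is not valid on real hardware,
# so L0 substitutes host state from its own environment before
# launching L2 directly.

def merge_vmcs(vmcs01, vmcs12):
    """Build VMCS0->2 so that L0 can launch L2 on the bare hardware."""
    return {
        # Guest state: what L1 wanted L2 to look like, kept unchanged.
        "guest": dict(vmcs12["guest"]),
        # Host state: an L2 exit must return control to L0, so the
        # host fields come from L0's own VMCS.
        "host": dict(vmcs01["host"]),
        # Control fields: union of the events L0 and L1 each want
        # trapped, so neither hypervisor misses an exit it relies on.
        "exit_on": vmcs01["exit_on"] | vmcs12["exit_on"],
    }

vmcs01 = {"guest": {"rip": 0x1000}, "host": {"rip": 0xFFFF8000},
          "exit_on": {"cpuid", "io"}}
vmcs12 = {"guest": {"rip": 0x2000}, "host": {"rip": 0x1000},
          "exit_on": {"cpuid", "hlt"}}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
print(sorted(vmcs02["exit_on"]))   # ['cpuid', 'hlt', 'io']
```

Note the design choice the sketch mirrors: the guest half of the merged structure belongs to L1&#039;s intent, while the host half must belong to L0, because every real exit lands in L0 first.&lt;br /&gt;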
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but there are only two MMU page tables in the hardware, the second being the EPT: one takes virtual to guest physical, and the other takes guest physical to host physical. The three translations are therefore compressed onto the two tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine, giving shadow-on-EPT, which compresses the three logical translations into the two hardware tables. The EPT tables rarely change, whereas the guest page tables change frequently. In multi-dimensional paging, L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
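The table compression can be sketched by composing two mappings into one (a toy illustration using dicts of page numbers; real EPT entries also carry permissions and are filled in lazily on faults):&lt;br /&gt;

```python
# Sketch of multi-dimensional paging's key step: collapse the chain
#   L2 physical -> L1 physical -> L0 physical
# into a single table EPT0->2, so the hardware resolves a guest
# physical address in one hop instead of two.

def compress(ept12, ept01):
    """Compose EPT1->2 with EPT0->1 to build EPT0->2."""
    return {l2_page: ept01[l1_page]
            for l2_page, l1_page in ept12.items()
            if l1_page in ept01}        # unmapped L1 pages keep faulting

ept12 = {0: 7, 1: 8}      # L2 page -> L1 page (maintained by L1)
ept01 = {7: 42, 8: 43}    # L1 page -> L0 page (maintained by L0)
ept02 = compress(ept12, ept01)
print(ept02)              # {0: 42, 1: 43}: one hop instead of two
```

Because the upper table (EPT0-&amp;gt;1 here) rarely changes, the composed table seldom needs rebuilding, which is why this construction cuts down exits.&lt;br /&gt;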
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [10], para-virtualized drivers, which are aware they are running on a virtual machine [11][12], and direct device assignment [13][14], which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, the authors used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs an IOMMU to let its virtual machine access the device safely. There is only one hardware IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 can program the device directly and the device&#039;s DMA goes straight into L2&#039;s memory space.&lt;br /&gt;
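The IOMMU compression is the same folding idea applied to DMA addresses (again a toy sketch with dicts; a real IOMMU table is a hardware page table programmed only by L0):&lt;br /&gt;

```python
# L1's emulated IOMMU maps the device's bus addresses into L1 physical
# memory; L0's table maps L1 physical into machine memory.  L0 folds
# the two into the one hardware IOMMU table so the device can DMA
# straight into L2's buffer without either hypervisor intervening.

def fold_iommu(viommu_l1, iommu_l0):
    """Build the single hardware IOMMU table for direct L2 device access."""
    return {bus: iommu_l0[l1_page]
            for bus, l1_page in viommu_l1.items()
            if l1_page in iommu_l0}

viommu_l1 = {0x100: 0x7000}     # bus page -> L1 physical (set up via L1)
iommu_l0 = {0x7000: 0x9F000}    # L1 physical -> machine physical
hw_table = fold_iommu(viommu_l1, iommu_l0)
print(hex(hw_table[0x100]))     # device DMA to bus page 0x100 lands at 0x9f000
```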
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying and tracking. The VMCSs are optimized further by copying multiple fields at once: normally, per Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field, but VMCS data can be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones the authors tested). The main cause of slowdown in the exit-handling code is the additional exits caused by its privileged instructions: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly with ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
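The effect of the vmread/vmwrite bypass can be sketched with a simple exit count (the numbers are invented for illustration; the paper&#039;s section 3.5 reports the measured impact):&lt;br /&gt;

```python
# Toy accounting for the optimization above.  On the spec-compliant
# path, every vmread/vmwrite issued by L1 traps to L0 (one exit each);
# the optimization replaces them with ordinary loads/stores, which do
# not trap.

FIELDS_TOUCHED = 50   # assumed VMCS fields L1 touches per L2 exit

def exits_per_l2_exit(bulk_copy):
    # One exit for the L2 trap itself, plus one per trapped
    # vmread/vmwrite when the bypass is disabled.
    return 1 + (0 if bulk_copy else FIELDS_TOUCHED)

print(exits_per_l2_exit(bulk_copy=False))  # 51 exits: spec-compliant path
print(exits_per_l2_exit(bulk_copy=True))   # 1 exit: direct memory copies
```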
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The pros ===&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers, and should not have too large an effect on an end user running an application in a nested virtual machine. This is especially true if the user is using the system at a low depth. One can further argue that the most common use cases for nested virtualization that the authors mention in section 1, such as virtualizing OSs that are already hypervisors (like Windows 7) and hypervisors in the cloud, will be at a shallow depth. It then follows that the testing the authors do in section 4 covers the most common use cases, so users can expect similarly impressive performance. The contribution is also visible with respect to security and compatibility. On the security side, this nested virtualization technique can be used to study hypervisor-level rootkits, such as Blue Pill [6], by hosting an infected hypervisor as a guest on top of another hypervisor. Since this is the first successful implementation of this type that does not modify hardware (there have been half-decent research designs), we expect to see increased interest in the nested integration model described above. The framework makes for convenient testing and debugging, since hypervisors can function inconspicuously beneath other nested hypervisors and VMs without being detected. Moreover, the efficiency overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and multi-dimensional paging, which is very appealing.&lt;br /&gt;
&lt;br /&gt;
=== The cons ===&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce additional levels of abstraction. The everlasting memory/efficiency trade-off continues as nested virtualization enters our lives. The performance hit is mainly imposed by the multiplication of exits generated when a nested guest traps, handing control to the lowest-level hypervisor, which may hand the trap off to the hypervisors above it before finally returning to the guest. Time complexity can therefore become an issue when multiple levels of virtualization are involved.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we observed that the paper performs tests at the L2 level: a guest with two hypervisors below it. To understand the limits of nesting, it might have been useful to investigate higher levels such as L4 or L5, because it is difficult to predict how the system will react to deep nesting; the increase in the number of traps and other performance-killing problems can potentially be exponential as the nesting gets deeper. Another significant detriment is that the paper relies on optimizations, such as avoiding vmread/vmwrite operations, that are aimed at specific CPUs, as stated on page 7, section 3.5: &amp;quot;(...) this optimization does not strictly adhere to the VMX specifications, and thus might not work on processors other than the ones we have tested&amp;quot;. This means that some of the techniques the authors use to increase performance are not reproducible on other systems, and so the generality of parts of their solution may be limited.&lt;br /&gt;
&lt;br /&gt;
=== The style and presentation ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner, and does a good job of conveying the technical details. It does, however, assume a high level of background knowledge and familiarity with the subject, especially with some of the more technical points of the architecture the hardware uses to implement virtualization. For example, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, illustrates the technique with an example using terms such as EPT and L1, which may not be familiar to readers unused to the technical language. On the other hand, the paper touches on a wide range of topics in the field of virtualization, including CPU, memory and I/O device virtualization. This wide scope means that many of the major components of virtualization are discussed, so in the process of understanding the paper one learns a lot about many different parts of the field.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The research presented in the paper is the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. This is a major improvement over the currently available solutions, and the techniques used to achieve nested virtualization are comprehensive and interesting. The work also has good potential as a basis for future research. The authors point to security and the cloud as two potential areas for future work; another interesting direction would be whether the approach the authors apply, compressing multiple levels of abstraction into one level with multi-dimensional paging and device assignment, could be applied to other problems that involve nesting. The paper also won the best paper award at the conference, further reflecting its quality.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;br /&gt;
&lt;br /&gt;
[2] Popek &amp;amp; Goldberg (1974).  [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCkQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.141.4815%26rep%3Drep1%26type%3Dpdf&amp;amp;ei=uxD4TL_OOYeSswbbydzZCA&amp;amp;usg=AFQjCNEavbxNIe4sUwidBvE_3S8MXY3fHg&amp;amp;sig2=BS1tG9eadLRrKVItvb6gBg &#039;&#039;Formal requirements for virtualizable 3rd Generation architecture, section 1: Virtual machine concepts&#039;&#039; ]&lt;br /&gt;
&lt;br /&gt;
[3] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, page 574-576.&lt;br /&gt;
&lt;br /&gt;
[4] Goldberg, P. [http://portal.acm.org/citation.cfm?id=800122.803950 Architecture of Virtual Machines]. In &#039;&#039;Proceedings of the Workshop on Virtual Computer Systems&#039;&#039;, ACM pp. 74-112&lt;br /&gt;
&lt;br /&gt;
[5] Berghmans, O. Nesting Virtual Machines in Virtualization Test Frameworks. Master&#039;s Thesis, University of Antwerp, 2010.&lt;br /&gt;
&lt;br /&gt;
[6] Presentation by Joanna Rutkowska, Black Hat Briefings 2006.&lt;br /&gt;
&lt;br /&gt;
[7] Buytaert, Dittner &amp;amp; Rule. &#039;&#039;The best damn server virtualization book period.&#039;&#039; pages 16-18&lt;br /&gt;
&lt;br /&gt;
[8] Clark, Fraser, Hand, Hansen, Jul, Limpach, Pratt &amp;amp; Warfield. &#039;&#039;Live migration of virtual machines&#039;&#039;. page 273-286&lt;br /&gt;
&lt;br /&gt;
[9] Gillam, Lee (2010). &#039;&#039;Cloud Computing: Principles, Systems and Applications&#039;&#039;. page 26-27&lt;br /&gt;
&lt;br /&gt;
[10] Sugerman, Venkitachalam &amp;amp; Lim. (2001). [http://portal.acm.org/citation.cfm?id=1618525.1618534 &#039;&#039;Virtualizing I/O devices on VMware workstation’s hosted virtual machine monitor.&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[11] Russell, Rusty (2008). [http://portal.acm.org/citation.cfm?id=1400097.1400108&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;idx=J597&amp;amp;part=newsletter&amp;amp;WantType=Newsletters&amp;amp;title=ACM%20SIGOPS%20Operating%20Systems%20Review &#039;&#039;virtio: towards a de-facto standard for virtual I/O devices&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[12] Barham, Dragovic, Fraser, Hand, Harris, Ho, Neugebauer, Pratt &amp;amp; Warfield (2003). [http://portal.acm.org/citation.cfm?doid=945445.945462 &#039;&#039;Xen and the art of virtualization&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[13] Levasseur, Uhlig, Stoess &amp;amp; Gotz (2004). [http://portal.acm.org/citation.cfm?id=1251256 &#039;&#039;Unmodified device driver reuse and improved system dependability via virtual machines&#039;&#039;.]&lt;br /&gt;
&lt;br /&gt;
[14] Yassour, Ben-Yehuda &amp;amp; Wasserman (2008). [http://www.mulix.org/misc/hv.pdf &#039;&#039;Direct device assignment for untrusted fully-virtualized virtual machines&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6841</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6841"/>
		<updated>2010-12-03T05:51:42Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* x86 models of virtualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it is essential that we provide some background on the concepts &lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation consists of a guest hypervisor and a virtualized environment, giving the guest virtual machine the illusion that it is running on the bare hardware. In reality, however, the host operating system treats the virtual machine as an application.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used: data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on full-virtualization of hardware within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise from the interaction of those guests with one another, and with the host hardware and operating system. The hypervisor also has control of host resources without the host truly knowing which resources the VMM controls. [2]&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside another virtual machine. For instance, the main operating system hypervisor (L0) can run the virtual machines L1, L2 and L3. In turn, each of those virtual machines is able to run its own virtual machines, and so on (Figure 1). &lt;br /&gt;
[[File:virtualization2.png|thumb|right|400px|Figure 1: Nested virtualization. The guest hypervisor denotes the creation of a virtual machine.]]&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, that range from 0 to 3.&lt;br /&gt;
Ring 0 (root mode) is the most privileged level, allowing access to the bare hardware components. The operating system kernel must &lt;br /&gt;
execute in Ring 0 in order to access the hardware and secure control. User programs execute in Ring 3 (guest mode). Ring 1 and Ring 2 are dedicated to device drivers and other operations. [7]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
Para-virtualization is a virtualization model that requires the guest OS kernel to be modified in order to allow it some direct access to the host hardware. In contrast to the full-virtualization that we discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest kernel to allow privileged hardware access via special instructions called hypercalls. The advantage here is that there are fewer environment switches and less interaction between the guest and host hypervisors, and thus greater efficiency. However, portability is an obvious issue, since a system can be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, do not support para-virtualization. [3]&lt;br /&gt;
&lt;br /&gt;
===x86 models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
The trap and emulate model is based on the idea that when a guest virtual machine attempts to execute privileged instructions, it triggers a trap or fault that goes down to level L0, where the host hypervisor resides. Since the host hypervisor is the only one capable of executing privileged instructions at Ring 0, it handles the trap caused by the guest and provides an emulation of the desired instruction to the guest. This way, the guest gains Ring 0 privilege through the help of the hypervisor. It is important to note that the guest is unaware of this emulation and operates as if it were running on the bare hardware.&lt;br /&gt;
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
x86-based systems provide single-level architectural support for virtualization. In this hardware model, the host hypervisor L0 (running in Ring 0) handles all traps caused by any guest hypervisor running at any level of the virtualization stack. Assume that the host hypervisor (L0) runs L1. When L1 attempts to run its own virtual machine, L2, this causes a trap that goes down to the host hypervisor at level L0; L0 then handles the trap and initiates the emulation required for L1 to create L2. More generally, every trap occurring at level Ln causes a drop to the L0 level, where the host hypervisor resides. The host hypervisor then forwards this trap to the parent of Ln, which is Ln-1, which in turn causes the trap to go down to L0 again, and so on. These trap-handling switches keep occurring until the desired emulated result reaches Ln, granting it the privilege to execute (Figure 2).&lt;br /&gt;
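As a toy illustration of this trap multiplication (a simplification for this article, not code from the paper), the sketch below counts traps to L0 needed to retire one exit from level n, assuming each forwarded exit makes the parent hypervisor execute a few privileged instructions that themselves trap:&lt;br /&gt;

```python
def exits_to_handle(level, priv_ops=3):
    """Count traps to L0 needed to retire one exit from L(level).

    Toy model: an exit from L1 is handled by L0 directly (one trap).
    An exit from a deeper level is forwarded to the parent hypervisor,
    whose handler runs priv_ops privileged instructions, each of which
    is itself an exit from that parent level.
    """
    if level == 1:
        return 1
    return 1 + priv_ops * exits_to_handle(level - 1, priv_ops)

# The count grows roughly as priv_ops**(level - 1), so nesting gets
# expensive fast without further optimization.
print(exits_to_handle(1), exits_to_handle(2), exits_to_handle(3))  # 1 4 13
```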
&lt;br /&gt;
[[File:single-levelV.png|thumb|right|400px|Figure 2: The single-level architecture support for virtualization, which relies on the host hypervisor (L0) to handle every trap caused by a guest.]]&lt;br /&gt;
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run a particular application or OS that is not compatible with the running OS as a virtual machine. Operating systems can also provide the user with a compatibility mode for other operating systems or applications; an example of this is the Windows XP mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues. [9]&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites can host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is essentially a hollow program or network that appears functional to outside users but in reality exists only as a security tool to watch or trap attackers. By using nested virtualization, we can create a honeypot of our system as virtual machines and see how our virtual system is being attacked or what kinds of features are being exploited. We can take advantage of the fact that these virtual honeypots can easily be controlled, manipulated, destroyed or even restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents. [8]&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid 1970s [4]. Early research in the area assumed that there is hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions [5]; however, these suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to having nested virtualization without architectural support is that, as the levels of virtualization increase, the number of control switches between different levels of hypervisors increases. A trap in a highly nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can be bounced between different levels of hypervisor, so one trap instruction multiplies into many trap instructions.&lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist, as on x86 processors. Solutions that do not require it suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique to reconcile the lack of hardware support on available hardware with efficiency. It is for the most part able to contain the problem of a single nested trap expanding into many more trap instructions, at least for the nesting depths the authors considered, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer between multiple guest operating systems. Nested virtualization must share these resources between multiple guest operating systems and guest hypervisors. The authors acknowledge the CPU, memory, and IO devices as the three key resources that they need to share. Combining this, the paper presents a solution to the problem of how to multiplex the CPU, memory, and IO efficiently between multiple virtual operating systems and hypervisors on a system which has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
The nonstop evolution of computers invites intricate designs that are virtualized and harmonious with cloud computing. The paper contributes to this trend by allowing consumers and users to equip machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for security and compatibility. The sophisticated abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, enable programmers to pursue further development and ideas on top of this infrastructure. For example, the Accountable Virtual Machines paper wraps programs in a particular VM state, which could certainly be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
The fundamental idea of the Turtles Project relies on multiplexing the hardware among the involved guest virtual machines. When a virtual machine such as L1 attempts to run L2, this triggers a trap that gets handled by L0, just as we illustrated earlier in the single-level architecture model. This trap includes the environment specifications needed to run L2 on the bare hardware. L0 converts L2&#039;s virtual memory to L1&#039;s virtual memory to make them run at the same level. Thus, the approach ends up flattening the virtualization levels and running them all as L0-level virtual machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the following question should be asked: if the host hypervisor ends up running and multiplexing the hardware among the guest virtual machines, then how can we keep track of the virtualization levels? The answer lies within the host hypervisor. By using special control structures such as the VMCS and the VMCB, the hypervisor has the ability to differentiate between the different levels and keep track of each parent and guest at each level.&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and only L0 occupies the architectural mode for a hypervisor. So, in order to multiplex the hardware, L0 makes L2 run as a virtual machine of L1 by merging VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it. To handle even a single L2 exit, L1 needs to read and write the VMCS and disable interrupts; these privileged operations would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of them trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was corrected by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
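The shape of the VMCS merge can be sketched as follows (a hypothetical simplification for this article: a real VMCS holds many hardware-defined guest-state, host-state and control fields, and the field names here are invented):&lt;br /&gt;

```python
def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Build VMCS0-to-2 so L0 can run L2 directly.

    Sketch only: the guest half describes L2 exactly as L1 prepared it,
    while the host half must return control to L0, so it is taken from
    the structure L0 already uses to run L1. Control fields are combined
    so that exits wanted by either hypervisor still cause exits.
    """
    return {
        "guest_state": dict(vmcs_1_2["guest_state"]),
        "host_state": dict(vmcs_0_1["host_state"]),
        "controls": vmcs_0_1["controls"] | vmcs_1_2["controls"],
    }

vmcs_0_1 = {"guest_state": {"rip": "L1_entry"},
            "host_state": {"rip": "L0_handler"},
            "controls": {"exit_on_hlt"}}
vmcs_1_2 = {"guest_state": {"rip": "L2_entry"},
            "host_state": {"rip": "L1_handler"},
            "controls": {"exit_on_io"}}
merged = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(merged["host_state"]["rip"])  # control always returns to L0
```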
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
The main idea is that with n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical addresses, and from L1 physical to L0 physical addresses. That is three levels of translation, but there are only two MMU page tables in the hardware: the regular page table (virtual to physical) and the EPT (guest physical to host physical). The authors compress the three translations onto the two tables, going from beginning to end in two hops instead of three. This is done with a shadow page table for the virtual machine and with shadow-on-EPT, which compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
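The table compression can be pictured as composing two frame maps into one (a sketch with invented frame numbers, not the actual hardware page-table format):&lt;br /&gt;

```python
def compose_ept(ept_1_2, ept_0_1):
    """Fold EPT1-to-2 through EPT0-to-1 to build EPT0-to-2.

    ept_1_2 maps L2-physical frames to L1-physical frames; ept_0_1 maps
    L1-physical frames to L0-physical (machine) frames. The composed
    table translates L2-physical to machine frames in one walk, so the
    hardware never needs a third level of translation.
    """
    return {
        l2_frame: ept_0_1[l1_frame]
        for l2_frame, l1_frame in ept_1_2.items()
        if l1_frame in ept_0_1  # skip frames L0 has not backed yet
    }

ept_0_1 = {0: 7, 1: 3}        # L1-physical frame to machine frame
ept_1_2 = {0: 1, 1: 0, 2: 9}  # L2-physical frame to L1-physical frame
print(compose_ept(ept_1_2, ept_0_1))  # {0: 3, 1: 7}
```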
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [10], para-virtualized drivers [11][12], and direct device assignment [13][14], which yields the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, the authors used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs to use an IOMMU to allow its virtual machines to access the device safely. There is only one physical IOMMU, so L0 emulates an IOMMU for L1. L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table, so that L2 programs the device directly and the device&#039;s DMAs are stored into L2&#039;s memory space directly.&lt;br /&gt;
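The same folding idea applies to DMA: L0 takes the mappings that L1 programs into the emulated IOMMU (device bus address to L1-physical frame) and runs them through its own memory map, producing the one table the physical IOMMU actually loads. A sketch with invented addresses:&lt;br /&gt;

```python
def shadow_iommu(emulated_iommu, ept_0_1):
    """Build the single hardware IOMMU table for direct L2 DMA.

    emulated_iommu holds what L1 programmed: device bus address to
    L1-physical frame. Folding through ept_0_1 (L1-physical to machine
    frame) yields bus address to machine frame, so device DMAs land in
    L2 memory without L0 or L1 touching the data path.
    """
    table = {}
    for bus_addr, l1_frame in emulated_iommu.items():
        if l1_frame in ept_0_1:
            table[bus_addr] = ept_0_1[l1_frame]
    return table

def dma_allowed(table, bus_addr):
    # The hardware IOMMU blocks DMA to any unmapped bus address,
    # which is what makes the bypass safe.
    return bus_addr in table

hw_table = shadow_iommu({4096: 5}, {5: 42})
print(hw_table, dma_allowed(hw_table, 4096), dma_allowed(hw_table, 8192))
```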
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transition between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0 the most time is spent merging VMCSs, so they optimize this by copying data between VMCSs only if it has been modified, carefully balancing full copying against partial copying and tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field. VMCS data can, however, be accessed without ill side effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones they tested). The main cause of slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, the guest and host specifications can be read or written directly using ordinary memory loads and stores, so L0 does not intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The pros ===&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates strong input in the area of virtualization and data sharing within a single machine. It is aimed at programmers and should not have too large an effect on an end user running an application in a nested virtual machine. This is especially true if the user is using the system at a low depth. One can further argue that the most common use cases for nested virtualization that the authors mention in section 1, such as virtualizing OSes that are already hypervisors (like Windows 7) and hypervisors in the cloud, will be at a shallow depth. It then follows that the testing the authors do in section 4 covers the most common use cases, so users can expect similarly impressive performance. Nevertheless, the contribution is visible with respect to security and compatibility. On the security side, this nested virtualization technique can be used to study hypervisor-level rootkits, such as Blue Pill [6], by hosting an infected hypervisor as a guest on top of another hypervisor. Since this is the first successful implementation of this type that does not modify hardware (there have been half-decent research designs), we expect to see increased interest in the nested integration model described above. The framework makes for convenient testing and debugging, since hypervisors can function inconspicuously beneath other nested hypervisors and VMs without being detected. Moreover, the efficiency overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and multi-dimensional paging, which sounds very appealing.&lt;br /&gt;
&lt;br /&gt;
=== The cons ===&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by exponentially generated exits, which are caused when a nested guest traps, handing control to the lowest-level hypervisor, which may hand off the trap to hypervisors above it before finally returning to the guest. So we can see that time complexity might be an issue when multiple levels of virtualization are involved.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we observed that the paper performs tests at the L2 level, a guest with two hypervisors below it. It might have been useful, for understanding the limits of nesting, if the authors had investigated higher levels of nesting such as L4 or L5. It can be difficult to predict how the system will react to deep nesting, because the increase in the number of traps and other performance-killing problems can potentially be exponential as the nesting gets deeper. Another significant detriment is that the paper relies on optimizations, such as the avoidance of vmread/vmwrite operations, that are aimed at specific CPUs, as stated on page 7, section 3.5: &amp;quot;(...) this optimization does not strictly adhere to the VMX specifications, and thus might not work on processors other than the ones we have tested&amp;quot;. This means that some of the techniques the authors use to increase performance are not reproducible on other systems, and so the generality of parts of their solution may be limited.&lt;br /&gt;
&lt;br /&gt;
=== The style and presentation ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner. It does a good job of conveying the technical details. The paper does seem to assume a high level of background knowledge and familiarity with the subject, especially with some of the more technical points of the architecture the hardware uses to implement virtualization. For example, section 4.1.2, &amp;quot;Impact of Multi-dimensional Paging&amp;quot;, attempts to illustrate the technique with an example using terms such as EPT and L1, which may not be familiar to readers not used to the technical language. The paper does, however, touch on a wide range of topics in the field of virtualization, including CPU, memory and I/O device virtualization. This wide scope means that many of the major components of virtualization are discussed, so in the process of understanding the paper one learns a lot about many different parts of the field.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The research presented in the paper is the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. This is a major improvement over the currently available solutions, and the techniques used to achieve nested virtualization are comprehensive and interesting. It also has good potential as a basis for future research. The authors refer to security and clouds as two potential areas for future research; another interesting direction could be how the approaches the authors apply, the way they compress multiple levels of abstraction into one level with multi-dimensional paging and device assignment, could be applied to other problems that involve nesting. The paper also won the best paper award at the conference, further reflecting its quality.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;br /&gt;
&lt;br /&gt;
[2] Popek &amp;amp; Goldberg (1974).  [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCkQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.141.4815%26rep%3Drep1%26type%3Dpdf&amp;amp;ei=uxD4TL_OOYeSswbbydzZCA&amp;amp;usg=AFQjCNEavbxNIe4sUwidBvE_3S8MXY3fHg&amp;amp;sig2=BS1tG9eadLRrKVItvb6gBg &#039;&#039;Formal requirements for virtualizable 3rd Generation architecture, section 1: Virtual machine concepts&#039;&#039; ]&lt;br /&gt;
&lt;br /&gt;
[3] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, page 574-576.&lt;br /&gt;
&lt;br /&gt;
[4] Goldberg, R. P. [http://portal.acm.org/citation.cfm?id=800122.803950 Architecture of Virtual Machines]. In &#039;&#039;Proceedings of the Workshop on Virtual Computer Systems&#039;&#039;, ACM, pp. 74-112.&lt;br /&gt;
&lt;br /&gt;
[5] Berghmans, O. Nesting Virtual Machines in Virtualization Test Frameworks. Master&#039;s Thesis, University of Antwerp, 2010.&lt;br /&gt;
&lt;br /&gt;
[6] Presentation by Joanna Rutkowska, Black Hat Briefings 2006.&lt;br /&gt;
&lt;br /&gt;
[7] Buytaert, Dittner &amp;amp; Rule. &#039;&#039;The best damn server virtualization book period.&#039;&#039; pages 16-18&lt;br /&gt;
&lt;br /&gt;
[8] Clark, Fraser, Hand, Hansen, Jul, Limpach, Pratt &amp;amp; Warfield. &#039;&#039;Live migration of virtual machines&#039;&#039;. page 273-286&lt;br /&gt;
&lt;br /&gt;
[9] Gillam, Lee (2010). &#039;&#039;Cloud Computing: Principles, Systems and Applications&#039;&#039;. page 26-27&lt;br /&gt;
&lt;br /&gt;
[10] Sugerman, Venkitachalm &amp;amp; Lim. (2001). [http://portal.acm.org/citation.cfm?id=1618525.1618534 &#039;&#039;Virtualizing I/O devices on VMware workstation’s hosted virtual machine monitor.&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[11] Russell, Rusty (2008). [http://portal.acm.org/citation.cfm?id=1400097.1400108&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;idx=J597&amp;amp;part=newsletter&amp;amp;WantType=Newsletters&amp;amp;title=ACM%20SIGOPS%20Operating%20Systems%20Review &#039;&#039;virtio: towards a de-facto standard for virtual I/O devices&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[12] Barham, Dragovic, Fraser, Hand, Harris, Ho, Neugebauer, Pratt &amp;amp; Warfield (2003). [http://portal.acm.org/citation.cfm?doid=945445.945462 &#039;&#039;Xen and the art of virtualization&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[13] Levasseur, Uhlig, Stoess &amp;amp; Gotz (2004). [http://portal.acm.org/citation.cfm?id=1251256 &#039;&#039;Unmodified device driver reuse and improved system dependability via virtual machines&#039;&#039;.]&lt;br /&gt;
&lt;br /&gt;
[14] Yassour, Ben-Yehuda &amp;amp; Wasserman (2008). [http://www.mulix.org/misc/hv.pdf &#039;&#039;Direct device assignment for untrusted fully-virtualized virtual machines&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6840</id>
		<title>COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_2_2010_Question_9&amp;diff=6840"/>
		<updated>2010-12-03T05:50:12Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Virtualization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;&#039;&#039;Go to discussion for group members confirmation, general talk and paper discussions.&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;big&amp;gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;&amp;quot;The Turtles Project: Design and Implementation of Nested Virtualization&amp;quot;&#039;&#039;&#039;&amp;lt;/big&amp;gt;&amp;lt;/big&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Authors:&#039;&#039;&#039;&lt;br /&gt;
* Muli Ben-Yehuda +&lt;br /&gt;
* Michael D. Day ++      &lt;br /&gt;
* Zvi Dubitzky +       &lt;br /&gt;
* Michael Factor +       &lt;br /&gt;
* Nadav Har’El +       &lt;br /&gt;
* Abel Gordon +&lt;br /&gt;
* Anthony Liguori ++&lt;br /&gt;
* Orit Wasserman +&lt;br /&gt;
* Ben-Ami Yassour +&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Research labs:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
+ IBM Research – Haifa&lt;br /&gt;
&lt;br /&gt;
++ IBM Linux Technology Center&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website:&#039;&#039;&#039; http://www.usenix.org/events/osdi10/tech/full_papers/Ben-Yehuda.pdf&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Video presentation:&#039;&#039;&#039; http://www.usenix.org/multimedia/osdi10ben-yehuda [Note: username and password are required for entry]&lt;br /&gt;
    &lt;br /&gt;
&lt;br /&gt;
=Background Concepts=&lt;br /&gt;
&lt;br /&gt;
Before we delve into the details of our research paper, it is essential that we provide some background on the concepts &lt;br /&gt;
and notions discussed by the authors.&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation consists of a guest hypervisor and a virtualized environment, giving the guest virtual machine the illusion that it is running on the bare hardware. In reality, however, the host operating system treats the virtual machine as an application.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used: data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on full-virtualization of hardware within the context of operating systems.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise from the interaction of those guests with one another, and with the host hardware and operating system. The hypervisor also has control of host resources without the host truly knowing which resources the VMM controls. [2]&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside another virtual machine. For instance, the main operating system hypervisor (L0) can run the virtual machines L1, L2 and L3. In turn, each of those virtual machines is able to run its own virtual machines, and so on (Figure 1). &lt;br /&gt;
[[File:virtualization2.png|thumb|right|400px|Figure 1: Nested virtualization. The guest hypervisor denotes the creation of a virtual machine.]]&lt;br /&gt;
&lt;br /&gt;
====Protection rings====&lt;br /&gt;
In modern operating systems, there are four levels of access privilege, called rings, that range from 0 to 3.&lt;br /&gt;
Ring 0 (root mode) is the most privileged level, allowing access to the bare hardware components. The operating system kernel must &lt;br /&gt;
execute in Ring 0 in order to access the hardware and secure control. User programs execute in Ring 3 (guest mode). Ring 1 and Ring 2 are dedicated to device drivers and other operations. [7]&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
Para-virtualization is a virtualization model that requires the guest OS kernel to be modified in order to allow it some direct access to the host hardware. In contrast to the full-virtualization that we discussed at the beginning of the article, para-virtualization does not simulate the entire hardware; rather, it relies on a software interface implemented in the guest kernel to allow privileged hardware access via special instructions called hypercalls. The advantage here is that there are fewer environment switches and less interaction between the guest and host hypervisors, and thus greater efficiency. However, portability is an obvious issue, since a system can be para-virtualized to be compatible with only one hypervisor. Note also that some operating systems, such as Windows, do not support para-virtualization. [3]&lt;br /&gt;
&lt;br /&gt;
===x86 models of virtualization===&lt;br /&gt;
&lt;br /&gt;
=====Trap and emulate model=====&lt;br /&gt;
The trap-and-emulate model is based on the idea that when a guest virtual machine attempts to execute a privileged instruction, it triggers a trap or fault that goes down to level L0, where the host hypervisor resides. Since the host hypervisor is the only one capable of executing privileged instructions at Ring 0, it handles the trap caused by the guest and provides an emulation of the desired instruction. This way, the guest gains Ring 0 privilege through the help of the hypervisor. It&#039;s important to note that the guest is unaware of this emulation and operates as if it were running on the bare hardware.&lt;br /&gt;
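The cycle above can be sketched in a few lines. This is a toy model of the concept, not real hypervisor code; the instruction names are illustrative.

```python
# Hypothetical sketch of trap-and-emulate: unprivileged instructions
# run natively; a privileged one traps, and the hypervisor (L0)
# performs the operation against the guest's *virtual* hardware state.

PRIVILEGED = {"write_cr3", "hlt", "out"}

class Hypervisor:
    def emulate(self, insn, vcpu_state):
        # L0 applies the effect to the virtual CPU state, so the guest
        # never actually runs at Ring 0 itself.
        vcpu_state[insn] = vcpu_state.get(insn, 0) + 1
        return "emulated:" + insn

def run_guest(instructions, hv):
    vcpu_state = {}
    log = []
    for insn in instructions:
        if insn in PRIVILEGED:
            # Trap: control drops to the hypervisor.
            log.append(hv.emulate(insn, vcpu_state))
        else:
            # Unprivileged instructions execute directly on the CPU.
            log.append("native:" + insn)
    return log

print(run_guest(["add", "write_cr3", "mov"], Hypervisor()))
```

The guest's instruction stream never changes, which is exactly why the guest remains unaware that some of its instructions were emulated.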
&lt;br /&gt;
=====Single-level architecture=====&lt;br /&gt;
x86-based systems provide single-level architectural support for virtualization. In this hardware model, the host hypervisor L0 (running in Ring 0) handles all traps caused by any guest hypervisor running at any level of the virtualization stack. Assume the host hypervisor (L0) runs L1. When L1 attempts to run its own virtual machine, L2, this causes a trap that goes down to the host hypervisor at level L0; L0 then handles the trap and performs the emulation L1 needs to create L2. More generally, every trap occurring at level Ln causes a drop to the L0 level where the host hypervisor resides. The host hypervisor then forwards the trap to the parent of Ln, which is Ln-1, which in turn causes the trap to go down to L0 again, and so on. These trap-handling switches keep occurring until the desired emulated result reaches Ln, giving it the privilege to execute (Figure 2).&lt;br /&gt;
&lt;br /&gt;
[[File:single-levelV.png|thumb|right|Figure 2: The single-level architecture support for virtualization that relies on the host hypervisor (L0) to handle every trap caused by a guest.|400px]]&lt;br /&gt;
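The multiplication of traps implied by this forwarding can be illustrated with a toy cost model. The recurrence below is an assumption made for illustration (each intermediate hypervisor's handler is assumed to execute a fixed number of privileged instructions, each of which is itself a trap); it is not a figure from the paper.

```python
# Toy cost model for the single-level x86 architecture: every trap, at
# any nesting depth, is first delivered to L0, which forwards it to
# the trapped guest's parent hypervisor.  The parent's handler itself
# executes privileged instructions that trap to L0 again, so one guest
# exit multiplies as nesting depth grows.

def exits_to_l0(depth, traps_per_handler):
    """L0 entries needed to retire one trap raised at `depth`.

    depth: nesting level of the trapping guest (1 = ordinary guest).
    traps_per_handler: privileged instructions each intermediate
    handler executes (assumed constant, purely for illustration).
    """
    if depth == 1:
        return 1  # L0 handles its own guest's trap directly.
    # The trap drops to L0 once, then the parent's handler raises
    # traps_per_handler further traps, each handled one level down.
    return 1 + traps_per_handler * exits_to_l0(depth - 1, traps_per_handler)

for d in (1, 2, 3, 4):
    print(d, exits_to_l0(d, 3))
```

Under these assumptions the count grows geometrically with depth (1, 4, 13, 40, ...), which is the performance problem the paper's optimizations attack.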
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A user can run a particular application or OS that&#039;s not compatible with the running OS by hosting it as a virtual machine. Operating systems can also provide the user with a compatibility mode for other operating systems or applications; an example is the Windows XP Mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. Both sides benefit: the provider attracts customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues. [9]&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
We can also use nested virtualization for security purposes. One common example is virtual honeypots. A honeypot is a decoy program or network that appears functional to outside users but in reality exists only as a security tool to watch or trap attackers. Using nested virtualization, we can create a honeypot of our system as virtual machines and observe how our virtual system is attacked or which features are exploited. We can take advantage of the fact that these virtual honeypots can easily be controlled, manipulated, destroyed or restored.&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents. [8]&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research problem=&lt;br /&gt;
&lt;br /&gt;
Nested virtualization has been studied since the mid-1970s [4]. Early research in the area assumed hardware support for nested virtualization. Actual implementations of nested virtualization, such as the z/VM hypervisor in the early 1990s, also required architectural support. Other solutions assume the hypervisors and operating systems being virtualized have been modified to be compatible with nested virtualization. There have also recently been software-based solutions [5]; however, these suffer from significant performance problems.&lt;br /&gt;
&lt;br /&gt;
The main barrier to nested virtualization without architectural support is that, as the levels of virtualization increase, the number of control switches between different levels of hypervisors increases. A trap in a deeply nested virtual machine first goes to the bottom-level hypervisor, which can send it up to the second-level hypervisor, which can in turn send it up (or back down), until in the worst case it reaches the hypervisor one level below the virtual machine itself. The trap can be bounced between different levels of hypervisor, so that one trap instruction multiplies into many trap instructions. &lt;br /&gt;
&lt;br /&gt;
Generally, solutions that require architectural support and specialized software for the guest machines are not practically useful, because this support does not always exist, such as on x86 processors. Solutions that do not require it suffer significant performance costs because of how the number of traps expands as nesting depth increases. This paper presents a technique that reconciles the lack of hardware support on available hardware with efficiency: it largely contains the problem of a single nested trap expanding into many more trap instructions, at least for the nesting depths the authors considered, which allows efficient virtualization without architectural support.&lt;br /&gt;
&lt;br /&gt;
More specifically, virtualization deals with how to share the resources of the computer among multiple guest operating systems. Nested virtualization must share these resources among multiple guest operating systems and guest hypervisors. The authors identify the CPU, memory, and I/O devices as the three key resources to share. Putting this together, the paper presents a solution to the problem of how to multiplex the CPU, memory, and I/O efficiently among multiple guest operating systems and hypervisors on a system that has no architectural support for nested virtualization.&lt;br /&gt;
&lt;br /&gt;
=Contribution=&lt;br /&gt;
The continuing evolution of computers encourages intricate designs that are virtualized and compatible with cloud computing. The paper contributes to this trend by allowing consumers and users to run machines with &#039;&#039;&#039;their&#039;&#039;&#039; choice of hypervisor/OS combination, which provides grounds for both security and compatibility. Abstractions presented in the paper, such as shadow paging and the isolation of a single OS&#039;s resources, give programmers a foundation for further development and ideas built on this infrastructure. For example, the paper Accountable Virtual Machines wraps programs around a particular VM state, which could certainly be placed on a separate hypervisor for ideal isolation.&lt;br /&gt;
&lt;br /&gt;
==Theory==&lt;br /&gt;
The fundamental idea of the Turtles Project relies on multiplexing the hardware among the involved guest virtual machines. When a virtual machine such as L1 attempts to run L2, this triggers a trap that gets handled by L0, just as illustrated earlier in the single-level architecture model. The trap includes the environment specifications needed to run L2 on the bare hardware. L0 converts L2&#039;s virtual memory to L1&#039;s virtual memory to make them run at the same level. The approach thus ends up flattening the virtualization levels and running them all as L0-level virtual machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the following question should be asked: if the host hypervisor ends up running and multiplexing the hardware among the guest virtual machines, how can we keep track of the virtualization levels? The answer lies within the host hypervisor. Using special control structures such as the VMCS and the VMCB, the hypervisor is able to differentiate between the different levels and keep track of each parent and guest at each level.&lt;br /&gt;
&lt;br /&gt;
==CPU Virtualization==&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure the hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. The vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine in guest mode. To multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 merges with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it. To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts; these operations would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of them traps, so a single high-level L2 (or L3) exit causes many low-level exits, and more exits mean less performance. The authors corrected this by making each single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
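The VMCS-merge step can be sketched as a toy model. The field names below are illustrative stand-ins, not real VMX field encodings, and the merge rule is simplified to its essence: host-state fields must remain L0's (only L0 truly owns the CPU), while guest-state fields come from the innermost guest.

```python
# Hypothetical sketch of the merge of VMCS 0-to-1 and VMCS 1-to-2 into
# VMCS 0-to-2, which lets the hardware run L2 directly as an L0 guest.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    merged = {}
    merged.update(vmcs_1_2)                        # L2's guest state
    merged["host_state"] = vmcs_0_1["host_state"]  # L0 stays the host
    return merged

vmcs_0_1 = {"host_state": "L0", "guest_state": "L1"}
vmcs_1_2 = {"host_state": "L1", "guest_state": "L2"}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(vmcs_0_2)
```

The real merge must also reconcile control fields and balance full against partial copying, which is exactly what the macro optimizations below address.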
&lt;br /&gt;
==Memory virtualization==&lt;br /&gt;
&lt;br /&gt;
With n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical addresses, and from L1 physical to L0 physical addresses. That is three levels of translation, but the hardware MMU supports only two page tables: the regular page table, which maps virtual to physical addresses, and the EPT, which maps guest physical to host physical addresses. The solution compresses the three translations onto the two available tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine and with shadow-on-EPT, which compresses the three logical translations into two tables. The EPT tables rarely change, whereas the guest page table changes frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2; this process results in fewer exits.&lt;br /&gt;
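The compression of two translation layers into one can be sketched as composing two lookup tables. This is a toy model with made-up page numbers, not the paper's implementation; real page tables are multi-level radix trees and the composed entries are built lazily on faults.

```python
# Hypothetical sketch: given L1's table mapping L2 physical pages to
# L1 physical pages, and L0's table mapping L1 physical pages to
# machine pages, L0 builds one table mapping L2 physical pages
# straight to machine pages, so a hardware walk takes one table
# instead of two.

def compose_tables(l2_to_l1, l1_to_l0):
    l2_to_l0 = {}
    for l2_page, l1_page in l2_to_l1.items():
        if l1_page in l1_to_l0:
            l2_to_l0[l2_page] = l1_to_l0[l1_page]
        # Pages L0 hasn't mapped yet are left out; touching one
        # faults, letting L0 fill the entry lazily.
    return l2_to_l0

l2_to_l1 = {0: 7, 1: 3, 2: 9}
l1_to_l0 = {7: 100, 3: 101}          # page 9 not yet mapped by L0
print(compose_tables(l2_to_l1, l1_to_l0))
```

Because the composed table only changes when either input table changes, and EPT-style tables change rarely, most L2 memory accesses proceed without any exit.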
&lt;br /&gt;
==I/O virtualization==&lt;br /&gt;
&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [10], para-virtualized drivers [11][12], and direct device assignment [13][14], which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, the authors used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. Doing so requires handling memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs to use an IOMMU to let its virtual machines access the device safely. There is only one place for an IOMMU in the hardware, so L0 emulates an IOMMU for L1 and then compresses the multiple IOMMU page tables into the single hardware IOMMU page table, so that L2 programs the device directly and the device DMAs into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
==Macro optimizations==&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 followed by an entry. Within L0, most of the time is spent merging VMCSs, so they optimized this by copying data between VMCSs only when it is modified, carefully balancing full copying against partial copying and tracking. The VMCSs are optimized further by copying multiple fields at once: by Intel&#039;s specification, VMCS reads and writes must be performed with the vmread and vmwrite instructions (which operate on a single field), but VMCS data can in practice be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of slow exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
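The payoff of the bulk-copy optimization can be shown with a toy exit counter. This is an illustration of the idea only; the field names are invented, and in reality the bulk copy is the non-architectural trick quoted in the critique below.

```python
# Toy comparison: per-field vmread/vmwrite (each one a trapping
# instruction when executed by L1 in guest mode) versus one bulk
# memory copy of the whole VMCS region.

def copy_per_field(fields):
    exits = 0
    copied = {}
    for name, value in fields.items():
        copied[name] = value
        exits += 1          # each vmread/vmwrite traps to L0
    return copied, exits

def copy_bulk(fields):
    # One large memory copy; no trapping instructions executed at all.
    return dict(fields), 0

fields = {"rip": 1, "rsp": 2, "cr3": 3}
print(copy_per_field(fields)[1], copy_bulk(fields)[1])
```

With hundreds of VMCS fields per exit, eliminating the per-field traps is where most of the saved time comes from.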
&lt;br /&gt;
=Critique=&lt;br /&gt;
&lt;br /&gt;
=== The pros ===&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers and should not have too large an effect on an end user running an application in a nested virtual machine, especially if the user is using the system at a low depth. One can further argue that the most common use cases for nested virtualization that the authors mention in section 1, such as virtualizing OSs that are already hypervisors (like Windows 7) and hypervisors in the cloud, will be at a shallow depth. It then follows that the testing the authors do in section 4 covers the most common use cases, so users can expect similarly impressive performance. The contribution is also visible with respect to security and compatibility. On the security side, this nested virtualization technique can be used to study hypervisor-level rootkits, such as Blue Pill [6], by hosting an infected hypervisor as a guest on top of another hypervisor.  Since this is the first successful implementation of this type that does not modify hardware (there have been half-decent research designs), we expect to see increased interest in the nested integration model described above. The framework also makes for convenient testing and debugging, because hypervisors can function inconspicuously towards other nested hypervisors and VMs without being detected. Moreover, the efficiency overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and multi-dimensional paging, which sounds very appealing.&lt;br /&gt;
&lt;br /&gt;
=== The cons ===&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency trade-off continues as nested virtualization enters our lives. The performance hit is mainly imposed by the exponentially generated exits that occur when a nested guest traps, handing control to the lowest-level hypervisor, which may hand the trap off to hypervisors above it before finally returning to the guest. So time complexity might be an issue when multiple levels of virtualization are involved.&lt;br /&gt;
&lt;br /&gt;
Furthermore, we observed that the paper performs tests at the L2 level, a guest with two hypervisors below it. It might have been useful, to understand the limits of nesting, if the authors had investigated higher levels of nesting such as L4 or L5, because it is difficult to predict how the system will react to deep nesting: the increase in the number of traps and other performance-killing problems can potentially be exponential as the nesting gets deeper. Another significant detriment is that the paper relies on optimizations, such as the avoidance of vmread/vmwrite operations, that are aimed at specific CPUs, as stated on page 7, section 3.5: &amp;quot;(...) this optimization does not strictly adhere to the VMX specifications, and thus might not work on processors other than the ones we have tested&amp;quot;. This means that some of the techniques the authors use to increase performance are not reproducible on other systems, and so the generality of parts of their solution may be limited.&lt;br /&gt;
&lt;br /&gt;
=== The style and presentation ===&lt;br /&gt;
&lt;br /&gt;
The paper presents an elaborate description of the concept of nested virtualization in a very specific manner, and it does a good job conveying the technical details. The paper does, however, assume a high level of background knowledge and familiarity with the subject, especially with some of the more technical points of the architecture the hardware uses to implement virtualization. For example, section 4.1.2, &amp;quot;Impact of Multi-dimensional paging&amp;quot;, attempts to illustrate the technique with an example using terms such as EPT and L1, which may not be familiar to readers not used to the technical language. On the other hand, the paper touches on a wide range of topics in the field of virtualization, including CPU, memory and I/O device virtualization. This wide scope means that many of the major components of virtualization are discussed, so in the process of understanding the paper one learns a lot about many different parts of the field.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
The research presented in the paper is the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. This is a major improvement over the currently available solutions, and the techniques used to achieve nested virtualization are comprehensive and interesting. It also has good potential as a basis for future research. The authors refer to security and clouds as two potential areas for future work; another interesting direction is whether the approaches the authors apply, compressing multiple levels of abstraction into one level with multi-dimensional paging and multi-level device assignment, could be applied to other problems that involve nesting. The paper also won the best paper award at the conference, further reflecting its quality.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] Tanenbaum, Andrew (2007).&#039;&#039; Modern Operating Systems (3rd edition)&#039;&#039;, page 569.&lt;br /&gt;
&lt;br /&gt;
[2] Popek &amp;amp; Goldberg (1974).  [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCkQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.141.4815%26rep%3Drep1%26type%3Dpdf&amp;amp;ei=uxD4TL_OOYeSswbbydzZCA&amp;amp;usg=AFQjCNEavbxNIe4sUwidBvE_3S8MXY3fHg&amp;amp;sig2=BS1tG9eadLRrKVItvb6gBg &#039;&#039;Formal requirements for virtualizable 3rd Generation architecture, section 1: Virtual machine concepts&#039;&#039; ]&lt;br /&gt;
&lt;br /&gt;
[3] Tanenbaum, Andrew (2007). &#039;&#039;Modern Operating Systems (3rd edition)&#039;&#039;, page 574-576.&lt;br /&gt;
&lt;br /&gt;
[4] Goldberg, P. [http://portal.acm.org/citation.cfm?id=800122.803950 Architecture of Virtual Machines]. In &#039;&#039;Proceedings of the Workshop on Virtual Computer Systems&#039;&#039;, ACM pp. 74-112&lt;br /&gt;
&lt;br /&gt;
[5] Berghmans, O. Nesting Virtual Machines in Virtualization Test Frameworks. Master&#039;s Thesis, University of Antwerp, 2010.&lt;br /&gt;
&lt;br /&gt;
[6] Presentation by Joanna Rutkowska, Black Hat Briefings 2006.&lt;br /&gt;
&lt;br /&gt;
[7] Buytaert, Dittner &amp;amp; Rule. &#039;&#039;The best damn server virtualization book period.&#039;&#039; pages 16-18&lt;br /&gt;
&lt;br /&gt;
[8] Clark, Fraser, Hand, Hansen, Jul, Limpach, Pratt &amp;amp; Warfield. &#039;&#039;Live migration of virtual machines&#039;&#039;. page 273-286&lt;br /&gt;
&lt;br /&gt;
[9] Gillam, Lee (2010). &#039;&#039;Cloud Computing: Principles, Systems and Applications&#039;&#039;. page 26-27&lt;br /&gt;
&lt;br /&gt;
[10] Sugerman, Venkitachalm &amp;amp; Lim. (2001). [http://portal.acm.org/citation.cfm?id=1618525.1618534 &#039;&#039;Virtualizing I/O devices on VMware workstation’s hosted virtual machine monitor.&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[11] Russell, Rusty (2008). [http://portal.acm.org/citation.cfm?id=1400097.1400108&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;idx=J597&amp;amp;part=newsletter&amp;amp;WantType=Newsletters&amp;amp;title=ACM%20SIGOPS%20Operating%20Systems%20Review &#039;&#039;virtio: towards a de-facto standard for virtual I/O devices&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[12] Barham, Dragovic, Fraser, Hand, Harris, Ho, Neugebauer, Pratt &amp;amp; Warfield (2003). [http://portal.acm.org/citation.cfm?doid=945445.945462 &#039;&#039;Xen and the art of virtualization&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[13] Levasseur, Uhlig, Stoess &amp;amp; Gotz (2004). [http://portal.acm.org/citation.cfm?id=1251256 &#039;&#039;Unmodified device driver reuse and improved system dependability via virtual machines&#039;&#039;.]&lt;br /&gt;
&lt;br /&gt;
[14] Yassour, Ben-Yehuda &amp;amp; Wasserman (2008). [http://www.mulix.org/misc/hv.pdf &#039;&#039;Direct device for untrusted fully-virtualized virtual machines&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6828</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6828"/>
		<updated>2010-12-03T05:35:50Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can my find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us so he still in for the course, that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to a hold of me. Best way is through email. jslonosk@connect.Carleton.ca.  And if that is still in our group but doesn&#039;t participate,  too bad for him--JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need to as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those paper by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of whos doing what. I should get the background concept done hopefully by tonight.  If anyone want to help with the other sections that would be great, please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absloutely, I agree. But first, lets pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is good or bad, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought maybe if we each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary? I posted a link in references and I&#039;l try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, protection rings, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging, which is discussed in the paper, and protection rings, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section for our fellow students to understand what the paper is about without having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us with another week to do it. I am sure we all have at least 3 projects due soon, other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings, and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot of work. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things. How are we to critique it? By their methods, or by the paper itself?&lt;br /&gt;
I find that, in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0-&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach: they talk vaguely about the sideline things and provide references. The VMCS, from what I understood, is just a structure used to create an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here are some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up, man. Sounds good. I&#039;ll see what else I can do in between the other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question directed at it in the exam review. If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind:&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s one specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The background concepts section is done. I will keep editing it and filtering some of the information. I don&#039;t have a lot to do today, so I will spend the whole day working on the paper, editing it, and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware; I will try to write something for this. Then we go into the CPU, memory, I/O and optimizations. And I can see that someone has already handled those things here in the discussion, so we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the contribution section today or tonight, so no worries. The critique: as I said, I added some stuff there, but we still need to debate the good and bad of the design as perceived in our own opinions. Since it&#039;s a critique, we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length, but rather content, as you all obviously know. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation section here on the discussion page, so whoever did that should edit it and take it to the main page. I&#039;m going to the office hours 2 hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub-headings and stuff; I think it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that; I got sort of caught up in a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware-support model, because it&#039;s briefly mentioned in the paper and isn&#039;t even available on x86 machines. I will ask the prof about those things. I will be seeing him in 30-40 minutes from now; his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page; I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csullvia.  I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we&#039;d better get this finished tonight by 12 or so.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
The headings, he said are fine. But he did mention that the article should make sense or be readable if we remove the headings or the section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice I&#039;m currently working on Critique section. Anticipate updates, modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I am looking at some of the writing on the main page. Would you guys mind if I just edit it a bit, make it sound a bit better? It&#039;s all your work --JSlonosky 03:37, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
Hey dude, I&#039;m currently giving the background concepts one final edit. If you want, you can do the other parts. Also, we still need to work on the critique and the theory of operation. I&#039;m staying here for the next 2-3 hours; we need to get this done. --[[User:Hesperus|Hesperus]] 03:58, 3 December 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
I have edited and organized the pictures and figures I created for the background concepts. The background concepts section needs no further editing as far as I&#039;m concerned. I will be writing something for the theory part of the contributions. As mentioned, I will do one final edit before going to bed! --[[User:Hesperus|Hesperus]] 04:29, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Heya, yeah man I will go over the rest of them.  I looked at the critique, it looks relatively concise.  I might have to read the paper again and see what else I spot. --JSlonosky 05:35, 3 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] This emulation, usually referred to as a virtual machine, presents a virtualized environment that gives the guest the illusion of running directly on the physical hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the possible issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest attempts to execute a privileged instruction or access privileged hardware state, it triggers a trap or fault, which is caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not and, based on that, provides an emulation of the requested outcome to the guest. The x86 systems discussed in the Turtles Project paper follow this model.&lt;br /&gt;
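As a rough sketch of that trap-and-emulate flow, here is a toy Python model (the instruction names and the allow-list are invented for illustration, not taken from the paper):&lt;br /&gt;

```python
# Toy trap-and-emulate model. The guest runs deprivileged: unprivileged
# instructions execute directly on the hardware, while privileged ones
# trap to the host hypervisor, which inspects and emulates them.
# Instruction names and the allow-list are invented for illustration.
ALLOWED = {"read_cr3", "cpuid"}

def host_handle_trap(instr):
    # The host decides whether the trapped instruction is legitimate,
    # then emulates its outcome so the guest believes it ran natively.
    if instr in ALLOWED:
        return "emulated by host"
    return "faulted"

def guest_execute(instr, privileged):
    if not privileged:
        return "executed directly"
    return host_handle_trap(instr)

print(guest_execute("add", privileged=False))
print(guest_execute("read_cr3", privileged=True))
print(guest_execute("bad_instr", privileged=True))
```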
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customers have the freedom to implement their systems on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged, it can easily be removed, recreated or even restored, since&lt;br /&gt;
we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machines directly through a hypervisor of choice. In addition, it provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisor running on top of it. For instance, L0 (the host hypervisor) runs L1; if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will do the trap handling and such.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to perform a privileged operation, it triggers a fault or trap; this trap is caught by the host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate request. If it is, the host emulates the privileged operation for the guest, again having it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor; the host hypervisor then forwards the trap and virtualization specification to the level above that is responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
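The &amp;quot;everything lands in L0 first&amp;quot; behaviour can be sketched as a toy Python function (a simplification of the forwarding described above; the list of levels is invented for illustration):&lt;br /&gt;

```python
# Toy model of single-level support: the hardware delivers every trap to
# L0, and L0 forwards it up the chain until it reaches the hypervisor
# responsible for the trapping level (level n is handled by level n - 1).
def handle_trap(trapping_level):
    """Return the list of hypervisor levels a trap visits, in order."""
    responsible = trapping_level - 1   # L(n-1) runs Ln, so it must respond
    chain = [0]                        # hardware always lands in L0 first
    level = 1
    while level != responsible + 1:    # forward upward until responsible
        chain.append(level)
        level = level + 1
    return chain

print(handle_trap(2))   # a trap in L2 visits L0, then L1
print(handle_trap(3))   # a trap in L3 visits L0, L1, then L2
```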
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does the Nest VMX virtualization work [2]:&lt;br /&gt;
L0(the lowest most hypervisor) runs L1 with VMCS0-&amp;gt;1(virtual machine control structure).The VMCS is the fundamental data structure that hypervisor per pars, describing the virtual machine, which is passed along to the CPU to be executed. L1(also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine which executes vmlaunch. vmlaunch will trap and L0 will have the handle the tape  because L1 is running as a virtual machine do to the fact that L0 is using the architectural mod for a hypervisor. So in order to have multiplexing happen by making L2 run as a virtual machine of L1. So L0 merges VMCS&#039;s; VMCS0-&amp;gt;1 merges with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2(enabling L0 to run L2 directly). L0 will now launch a L2 which cause it to trap. L0 handles the trap itself or will forward it to L1 depending if it L1 virtual machines responsibility to handle.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. That wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, each of those operations traps, so a single high-level L2 exit (or L3 exit) causes many exits of its own, and more exits means less performance. This problem was corrected by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, either L1 or L0, depending on the trap, finishes handling it and resumes L2. This process then repeats continuously.&lt;br /&gt;
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU provides only two page tables (the second being the EPT): one takes virtual to guest-physical addresses, the other guest-physical to host-physical addresses. The trick is to compress the three translations onto the two hardware tables, going from start to end in two hops instead of three. This can be done with a shadow page table for the virtual machine (shadow-on-EPT), which compresses the three logical translations onto two tables; however, the EPT tables rarely change, whereas the guest page tables change frequently. So instead L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
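The compression step is essentially the composition of two translation maps into one. Here is a toy sketch, with tiny flat dictionaries standing in for real multi-level page tables and made-up frame numbers:&lt;br /&gt;

```python
# Toy model of multi-dimensional paging: L0 folds EPT1->2 (maintained by L1)
# and EPT0->1 (maintained by L0) into a single EPT0->2 table, so the hardware
# can translate an L2 guest-physical address straight to an L0 host-physical
# address in one hop.

ept12 = {0: 4, 1: 7}   # L2 physical -> L1 physical (the view held by L1)
ept01 = {4: 12, 7: 9}  # L1 physical -> L0 physical (the view held by L0)

def build_ept02(ept12, ept01):
    """Compose the two logical translations into the single table that the
    hardware actually walks."""
    return {l2_addr: ept01[l1_addr] for l2_addr, l1_addr in ept12.items()}

ept02 = build_ept02(ept12, ept01)
```

In the paper the combined table is filled in lazily, on EPT violations, rather than eagerly as in this sketch.&lt;br /&gt;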
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which are aware they are running in a virtual machine (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get that performance safely, an IOMMU is used for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA, and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU to let its virtual machine access the device safely. There is only one platform IOMMU, so L0 has to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 can program the device directly and device DMAs land in L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How they implemented the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where the guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. They optimized the transitions between L1 and L2: each such transition involves an exit to L0 and then an entry, and in L0 most of the time is spent merging VMCSs, so they optimized the merge by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying plus tracking. VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, VMCS reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time; but in practice VMCS data can be accessed without ill side effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not have to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
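The single-field-versus-bulk-copy trade-off can be illustrated with a small simulation. The field names and counts below are invented; the point is only that one large copy replaces many trapping per-field instructions.&lt;br /&gt;

```python
# Sketch of the VMCS copying micro-optimization. Each per-field access would
# be a vmread or vmwrite, and under nesting each one traps to L0; the bulk
# path stands in for a single large memory copy that bypasses them.

FIELDS = ["guest_rip", "guest_rsp", "guest_cr3", "guest_rflags"]

def copy_fields_one_by_one(src, dst):
    """The architecturally sanctioned way: one vmread plus one vmwrite per
    field, each of which can trap under nesting."""
    trapping_ops = 0
    for field in FIELDS:
        dst[field] = src[field]   # vmread(field) followed by vmwrite(field)
        trapping_ops += 2
    return trapping_ops

def copy_fields_bulk(src, dst):
    """The Turtles shortcut: one large memory copy, no per-field
    instructions (safe only on the processors the authors tested)."""
    dst.update(src)               # stands in for a single big memcpy
    return 1
```

Both paths produce the same destination state; the bulk path simply issues far fewer trapping operations.&lt;br /&gt;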
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits is much slower when it runs on a guest hypervisor such as L1 than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing VMCS data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown being the additional exits triggered by the exit-handling code itself).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers and does not affect the end-user in any clearly detectable way in the usage of applications on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of this type that does not modify hardware (there have been only half-efficient designs), we expect to see increased interest in the nested virtualization model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, since hypervisors can function inconspicuously under other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and direct paging (the multi-dimensional paging technique). &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project&#039;s solution, all levels of hypervisor can take advantage of any virtualization support that is present. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing and organization: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead per nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved, to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by the exponentially generated exits. Furthermore, we observed that the paper performs its tests at the L2 level, i.e. a guest with two hypervisors below it. It might have been useful, in order to understand the limits of nesting, if they had investigated higher levels of nesting such as L4 or L5, just to see what the effect is. Another significant detriment of the paper is that optimizations such as vmread and vmwrite avoidance are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* Lots of exits cause a significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing and organization: some concepts, such as the VMCSs, are written as though you are already familiar with how they work, or have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[2]  INTEL CORPORATION. Intel 64 and IA-32 Architectures Software Developers Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6713</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6713"/>
		<updated>2010-12-03T03:42:28Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird because at last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edits in the Contribution and Critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization, and para-virtualization. They&#039;re just a rough version, however. I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that, in the organization of the paper, they give you links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS0-&amp;gt;1 notation, for instance, isn&#039;t explained. I understand what they mean, but it seems that they assume you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach; they talk only vaguely about the sideline things and provide references. The VMCS, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but i&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, i&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for the clear up man. Sounds good.  I&#039;ll see what else I can do in between other work I got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed to it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The backgrounds section is done. I will keep editing it and filter some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper and editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware, I will try to write something for this. Then we go into the CPU, Memory, I/O and optimization. And I can see that someone already handled those things here in the discussion. So we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the Contribution section today or tonight, so no worries. The critique, as I said, I&#039;ve added some stuff there, but we still need to debate the good and bad of the design as we perceive it; since it&#039;s a critique we can use the first person, &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, could each of us contribute to the Critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length, rather content, as you all know. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually the contributions section is outlined below in the implementation here in the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours 2 hours from now to ask the prof a couple of things including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub headings and stuff, I think it breaks the flow of the paper if we have a whole bunch of sub headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes Hesperus. If I have time I&#039;ll try to come, but i&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later tonight to resize it and make it smaller. Regarding the headings, yeah I can do that. I got sort of caught up with a lot of the terms and categorizations. I was even thinking about taking off the multiple-hardware support model, because it&#039;s briefly mentioned in the paper and it&#039;s not even available in x86 machines. I will ask the prof about those things. I will be seeing him in 30-40 minutes from now; his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys.. whoever did the implementation section below which is basically the contribution, should try to edit it and take it to the main page, I have already provided the headings for the contribution in the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csullvia.  I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we better get this finished tonight by 12 or something.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
The headings, he said are fine. But he did mention that the article should make sense or be readable if we remove the headings or the section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice I&#039;m currently working on Critique section. Anticipate updates, modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I am looking at some of the writing on the main page.  Would you guys mind if I just edit it a bit? Make it sound a bit better?  It&#039;s all your work --JSlonosky 03:37, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program, or process to run on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest the illusion that it is running directly on the physical hardware. In other words, we can view the virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where the technology is used, such as data virtualization, storage virtualization, mobile virtualization, and network virtualization. For the purposes of our assigned paper, we focus on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to handle the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; L1 in turn runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware state, it triggers a trap or fault that is caught by the host hypervisor. The host hypervisor then determines whether the instruction should be allowed to execute and, based on that, emulates the requested outcome for the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
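To make the trap-and-emulate flow concrete, here is a toy Python sketch; the class, the instruction names, and the return strings are all invented for illustration and are not the paper&#039;s code or any real VMM&#039;s API:&lt;br /&gt;

```python
# Toy model of trap-and-emulate: a privileged instruction issued by a
# guest traps to the host hypervisor, which validates it and emulates
# the result. All names here are illustrative, not from any real VMM.

PRIVILEGED = {"vmlaunch", "vmread", "vmwrite"}

class HostHypervisor:
    def __init__(self):
        self.log = []

    def execute(self, instr):
        if instr in PRIVILEGED:
            return self.handle_trap(instr)   # trap to the host
        return "ran-directly"                # unprivileged: no trap

    def handle_trap(self, instr):
        self.log.append(instr)
        # Decide whether the request is legitimate, then emulate it.
        if instr == "vmlaunch":
            return "emulated-vm-entry"
        return "emulated-" + instr

host = HostHypervisor()
print(host.execute("add"))        # ran-directly
print(host.execute("vmlaunch"))   # emulated-vm-entry
```

The key point the sketch captures is that unprivileged work runs at native speed, while every privileged operation pays the cost of a round trip through the host hypervisor.&lt;br /&gt;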
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. Both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and websites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation, and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated, or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they are not altering the underlying architecture. This is the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it efficiently.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the rapidly developing area of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user can manage their own virtual machine directly through a hypervisor of their choice. In addition, nesting provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for supporting nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architectural support: every hypervisor handles every other hypervisor running on top of it. For instance, L0 (the host hypervisor) runs L1; if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architectural support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a faked environment for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to execute a privileged operation or gain hardware-level privileges, it triggers a trap or fault; this trap is caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate request. If it is, the host emulates the privileged behaviour for the guest, again having it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must first go back to the main host hypervisor. The host hypervisor then forwards the trap and the virtualization state to whichever higher-level hypervisor is responsible. For instance, if L0 runs L1 and L1 attempts to run L2, the command to run L2 goes down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in because it is what x86 machines follow. See figure 1 in the paper for a better understanding.&lt;br /&gt;
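As a rough illustration of how every trap funnels through L0 first, here is a toy Python sketch; the function and the level numbering are invented for illustration, not taken from the paper:&lt;br /&gt;

```python
# Toy model of single-level architectural support: every trap, from any
# nesting level, first lands at L0; L0 then forwards it to the hypervisor
# responsible for the trapping guest. Purely illustrative.

def deliver_trap(trapping_level, responsible_level):
    """Return the list of levels a trap visits before it is handled."""
    path = [0]                          # hardware always delivers to L0
    if responsible_level != 0:
        path.append(responsible_level)  # L0 re-injects to, e.g., L1
    return path

# L2 traps; L1 is its hypervisor, so L0 forwards the trap to L1.
print(deliver_trap(2, 1))   # [0, 1]
# L1 traps on its own privileged instruction; L0 handles it itself.
print(deliver_trap(1, 0))   # [0]
```

The extra hop through L0 on every trap is exactly the overhead the paper&#039;s optimizations target.&lt;br /&gt;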
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work [2]:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine and L0 is the one using the architectural hypervisor mode. To multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility to handle.&lt;br /&gt;
To handle even a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, each of these privileged operations itself traps, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. The authors corrected this problem by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2; this process repeats continuously.&lt;br /&gt;
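As a loose illustration of the VMCS merge, here is a toy Python sketch. A real VMCS has dozens of architecturally defined fields and the merge is done field by field in L0; the two field names used here are invented:&lt;br /&gt;

```python
# Toy sketch of the VMCS merge: the guest state of VMCS 0-to-2 comes
# from what L1 specified for L2 (VMCS 1-to-2), while the host (exit)
# state must point back at L0, as in VMCS 0-to-1. Field names invented.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    return {
        "guest_state": vmcs_1_2["guest_state"],  # run L2 itself
        "host_state": vmcs_0_1["host_state"],    # but exit into L0
    }

vmcs_0_1 = {"guest_state": "L1-regs", "host_state": "L0-handler"}
vmcs_1_2 = {"guest_state": "L2-regs", "host_state": "L1-handler"}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(vmcs_0_2["guest_state"], vmcs_0_2["host_state"])   # L2-regs L0-handler
```

The sketch shows the essential asymmetry: L2&#039;s register state is taken verbatim from L1&#039;s specification, but every exit is wired to land in L0, never directly in L1.&lt;br /&gt;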
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware provides only two MMU page tables: the regular page table, which takes virtual to guest-physical addresses, and the EPT (extended page table), which takes guest-physical to host-physical addresses. The technique compresses the three logical translations onto the two hardware tables, going from start to end in two hops instead of three. One way to do this is shadow-on-EPT, which uses a shadow page table for the virtual machine on top of EPT; however, the guest page table changes frequently while the EPT tables rarely change. So instead, L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
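The compression of translations can be sketched as composing two lookup tables. This toy Python model uses invented page numbers and dicts; it is not how hardware EPTs are actually encoded:&lt;br /&gt;

```python
# Toy sketch of multi-dimensional paging: three logical translations
# (L2-virtual to L2-physical, L2-physical to L1-physical, L1-physical
# to L0-physical) are compressed so that the two hardware tables
# suffice. Each "table" here is just a dict of page numbers.

def compose(first, second):
    """Compose two translation tables: page p maps via first, then second."""
    return {p: second[first[p]] for p in first if first[p] in second}

ept_1_2 = {10: 20, 11: 21}    # L2-physical to L1-physical (from L1)
ept_0_1 = {20: 30, 21: 31}    # L1-physical to L0-physical (from L0)
ept_0_2 = compose(ept_1_2, ept_0_1)   # what L0 hands to the hardware
print(ept_0_2)   # {10: 30, 11: 31}
```

Because EPT entries change rarely, L0 only has to rebuild entries of EPT0-&amp;gt;2 when one of the two source tables changes, which is far cheaper than shadowing the frequently changing guest page table.&lt;br /&gt;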
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest runs a driver that knows it is talking to a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which yields the best performance. To get that performance they used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of the many options, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA, and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU to allow its virtual machine to access the device safely. There is only one hardware IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 programs the device directly and the device DMAs into L2&#039;s memory space directly.&lt;br /&gt;
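The folding of the emulated IOMMU into the single hardware one is the same table-composition idea as in multi-dimensional paging. Here is a toy Python sketch with invented addresses and table layout:&lt;br /&gt;

```python
# Toy sketch of multi-level device assignment: there is one hardware
# IOMMU, so L0 folds the mappings of the IOMMU it emulates for L1 into
# the single hardware IOMMU table, letting the device DMA straight into
# L2 memory. Page numbers are invented, purely illustrative.

def build_hw_iommu(l1_iommu, l0_iommu):
    """Fold L1&#39;s emulated IOMMU table through L0&#39;s into one table."""
    table = {}
    for bus_page, l1_page in l1_iommu.items():
        if l1_page in l0_iommu:
            table[bus_page] = l0_iommu[l1_page]
    return table

l1_iommu = {0: 100}     # L2 bus page 0 maps to L1-physical page 100
l0_iommu = {100: 555}   # L1-physical page 100 maps to L0-physical 555
hw = build_hw_iommu(l1_iommu, l0_iommu)
print(hw[0])   # 555: a DMA to bus page 0 lands in L2 real memory
```

Once the composed table is loaded, neither L0 nor L1 is involved in the data path: the device and L2 talk directly, which is where the performance win comes from.&lt;br /&gt;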
&lt;br /&gt;
&lt;br /&gt;
How the micro-optimizations to make it go faster were implemented:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transition between L1 and L2 and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were confined to L0. They optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying versus partial copying with tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field, but VMCS data can be accessed without ill side effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of the exit-handling slowdown is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read or written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
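A back-of-the-envelope way to see why bulk copying helps, assuming (hypothetically) that every vmread/vmwrite issued by L1 traps to L0 while ordinary memory copies do not; the counts are invented, not measurements from the paper:&lt;br /&gt;

```python
# Toy exit-multiplication model: when L1 handles a single L2 exit,
# every privileged instruction in its handler (vmread, vmwrite, ...)
# itself traps to L0. Replacing per-field vmread/vmwrite with bulk
# memory copies removes those extra exits. Counts are invented.

def exits_for_one_l2_exit(privileged_ops_in_handler):
    # 1 for the original L2 exit, plus one L1 exit per privileged op
    return 1 + privileged_ops_in_handler

unoptimized = exits_for_one_l2_exit(40)   # e.g. ~40 vmread/vmwrite calls
optimized = exits_for_one_l2_exit(2)      # most replaced by memory copies
print(unoptimized, optimized)   # 41 3
```

The model makes the shape of the problem obvious: the cost of one L2 exit scales with the number of privileged instructions in L1&#039;s handler, so shaving those instructions pays off linearly.&lt;br /&gt;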
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3% and with SPECjbb is 6.3%. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits while running in the guest hypervisor L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing VMCS data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown being the additional exits triggered within the exit-handling code itself).&lt;br /&gt;
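Assuming the per-level overhead simply compounds multiplicatively (a simplification; the paper only measures a single nesting level), a quick estimate of runtime at deeper nesting looks like this:&lt;br /&gt;

```python
# Back-of-the-envelope: if each nesting level adds roughly 6-10%
# overhead (the kernbench/SPECjbb figures above), total overhead
# compounds multiplicatively with depth. A rough estimate, not a
# measurement from the paper.

def relative_runtime(levels, per_level_overhead):
    """Runtime relative to bare metal, given overhead per nesting level."""
    return (1 + per_level_overhead) ** levels

print(round(relative_runtime(1, 0.103), 3))   # 1.103 (kernbench, 1 level)
print(round(relative_runtime(3, 0.103), 3))   # 1.342
```

This is one reason measuring deeper nesting (L3, L4, ...) would be interesting: the compounding assumption may or may not hold in practice.&lt;br /&gt;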
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and hardware sharing within a single machine. It is aimed at systems programmers and produces no clearly detectable deviation for the end user in the behaviour of applications running on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of its type that does not modify hardware (there have been less efficient designs), we expect to see increased interest in the nested integration model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, since hypervisors can function inconspicuously beneath other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and direct paging (the multi-dimensional paging technique). &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research shown in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project solution, all levels of hypervisor can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing, organization wise: They provide links and resources that can help give explanations to the concepts that they briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each level of nesting.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is the efficiency cost that appears as the authors introduce an additional level of abstraction; the everlasting memory/efficiency trade-off continues as nested virtualization enters our lives. The performance hit is mainly imposed by the multiplication of exits. Furthermore, we observed that the paper performs tests at the L2 level, i.e. a guest with two hypervisors below it. To understand the limits of nesting, it might have been useful to investigate higher levels of nesting, such as L4 or L5, just to see what the effect is. Another significant detriment is that optimizations such as avoiding vmread and vmwrite are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* lots of exits cause significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: some concepts, such as the VMCSs, are written as though you are already familiar with how they work, or have read the appropriate references for that section of the research project.&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[2]  INTEL CORPORATION. Intel 64 and IA-32 Architectures Software Developers Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6705</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6705"/>
		<updated>2010-12-03T03:37:11Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can my find my contact info in my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen. --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyway, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here; the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. Those sections should not be written from the perspective of one person; we all need to work through and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him. --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we&#039;ve gotta focus on that all together, not just one person, for sure. --[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in the references, and I&#039;ll try to research more into the purpose of nested virtualization. --[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think that&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit them today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution. But it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us with another week to do it. I am sure we all have at least 3 projects due soon, other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add in what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice, man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004. --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, the models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings, and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization, and the macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things. How are we to critique it? By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth into things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
For example, the VMCS(0-&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach; they vaguely talk about the sideline things and provide references. The VMCS, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here are some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* The solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* The solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, which other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow; probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think; I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up, man. Sounds good. I&#039;ll see what else I can do in between the other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question directed at it on the exam review. If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind:&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The background section is done. I will keep editing it and filtering some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper, editing it, and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware; I will try to write something for this. Then we go into the CPU, memory, I/O, and optimizations. And I can see that someone already handled those things here in the discussion, so we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the contribution section today or tonight, so no worries. The critique: as I said, I added some stuff there, but we still need to debate the good and bad of the design as perceived by our own opinions. Since it&#039;s a critique, we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together in concise sentences? We have to get straight to the point. We are not aiming for length, but rather content, as you all know obviously. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation here on the discussion page, so whoever did that should edit it and take it to the main page. I&#039;m going to the office hours 2 hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub-headings and stuff; I think it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later on tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that. I got sort of caught up with a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware-support model, because it&#039;s briefly mentioned in the paper and it&#039;s not even available on x86 machines. I will ask the prof about those things; I will be seeing him 30-40 minutes from now, his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page; I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csulliva. I will go ahead and do it for him if he can&#039;t and has something else to do. --JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we&#039;d better get this finished tonight by 12 or something.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
The headings, he said, are fine. But he did mention that the article should still make sense or be readable if we remove the headings or section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice I&#039;m currently working on Critique section. Anticipate updates, modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I am looking at some of the writing on the main page. Would you guys mind if I just edit it a bit? Make it sound a bit better? It is all of your work. --JSlonosky 03:37, 3 December 2010 (UTC)&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program, or process to operate on. [1] This emulation, usually referred to as a virtual machine, which includes a guest hypervisor and a virtualized environment, gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where the technology is used, like data virtualization, storage virtualization, mobile virtualization, and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guest virtual machines with one another, and their interaction with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; L1 in turn runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
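The level numbering can be sketched in a few lines of Python. This is purely an illustrative toy (the class and variable names are hypothetical, not from the paper), using the convention that L0 is the bottom, bare-metal hypervisor:&lt;br /&gt;

```python
# Toy model of nesting levels: each guest hypervisor is launched by the
# level directly beneath it, with L0 running on the bare hardware.

class Hypervisor:
    def __init__(self, level, parent=None):
        self.level = level      # 0 = bare-metal host hypervisor
        self.parent = parent    # the hypervisor that launched this one
        self.guests = []

    def launch_guest(self):
        guest = Hypervisor(self.level + 1, parent=self)
        self.guests.append(guest)
        return guest

L0 = Hypervisor(0)        # host hypervisor on real hardware
L1 = L0.launch_guest()    # guest hypervisor
L2 = L1.launch_guest()    # nested guest
L3 = L2.launch_guest()    # nesting can continue to arbitrary depth

print([h.level for h in (L0, L1, L2, L3)])  # prints [0, 1, 2, 3]
```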
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute an instruction that gains or accesses privileged hardware context, it triggers a trap or fault, which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
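The trap-and-emulate cycle can be sketched as a tiny Python simulation. This is a toy, not the paper&#039;s mechanism: the instruction names and the set of allowed operations are made up, and on real hardware the CPU itself raises the trap rather than the guest code:&lt;br /&gt;

```python
# Toy simulation of trap-and-emulate: a guest's privileged instruction
# traps, and the host hypervisor decides to emulate it or reject it.
# Instruction names and the ALLOWED set are invented for illustration.

class Trap(Exception):
    def __init__(self, instruction):
        super().__init__(instruction)
        self.instruction = instruction

ALLOWED = {"cpuid", "read_cr3"}   # privileged ops the host will emulate

def guest_executes(instruction):
    # On real hardware the CPU raises the trap; here we model it directly.
    raise Trap(instruction)

def host_hypervisor(run_guest_op):
    try:
        run_guest_op()
    except Trap as t:
        if t.instruction in ALLOWED:
            return f"emulated {t.instruction}"   # emulate, resume the guest
        return f"rejected {t.instruction}"       # deny the privileged op

print(host_hypervisor(lambda: guest_executes("cpuid")))  # emulated cpuid
print(host_hypervisor(lambda: guest_executes("outb")))   # rejected outb
```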
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customers have the freedom to implement their systems on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites, such as Netflix, to host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMWare and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation, and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated, or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization: why we use it (the paper gives the example of XP inside Win 7), and maybe going over the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they&#039;re not really altering the underlying architecture. This is basically the most interesting thing about the paper: x86 processors don&#039;t support nested virtualization in hardware, but apparently the authors were able to do it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user can manage his own virtual machine directly through a hypervisor of his choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trap handling and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that when a guest hypervisor tries to perform an operation requiring hardware-level privileges, it evokes a fault or a trap; this trap is caught by the main host hypervisor and inspected to see whether it is a legitimate and appropriate request. If it is, the host performs the privileged operation on the guest&#039;s behalf, again having the guest think that it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor, which then forwards the trap and the virtualization work to the level responsible for it. For instance, suppose L0 runs L1, and L1 then attempts to run L2: the command to run L2 goes down to L0, and L0 forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
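The single-level flow described above can be sketched as a toy model (the function name and log messages here are purely illustrative, not from the paper&#039;s code):&lt;br /&gt;

```python
# Toy model of the single-level (x86-style) architecture: a trap at any
# nesting level always exits to L0 first, and L0 then forwards it to the
# hypervisor logically responsible for the faulting guest.

def handle_trap(faulting_level, log):
    """Simulate one trap raised inside L[faulting_level]."""
    log.append("exit to L0")                 # hardware knows only one hypervisor
    responsible = faulting_level - 1         # the logical parent hypervisor
    if responsible == 0:
        log.append("L0 handles the trap itself")
    else:
        # L0 forwards the trap; the parent handles it while itself running
        # as a guest of L0, so its own privileged instructions trap again.
        log.append("L0 forwards the trap to L" + str(responsible))
        log.append("L" + str(responsible) + " emulates the trapped operation")
    log.append("resume L" + str(faulting_level))
    return log
```

For example, a trap in L2 produces the sequence exit-to-L0, forward-to-L1, emulate, resume; nothing ever goes straight from L2 to L1.&lt;br /&gt;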
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work [2]:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine (the x86 architectural model supports only a single level of hypervisor). To multiplex the CPU and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of these operations traps, so a single high-level L2 exit (or L3 exit) causes many exits to L0 (and more exits mean less performance). The authors corrected this problem by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end L1 or L0, depending on the trap, finishes handling it and resumes L2; this process then repeats continuously.&lt;br /&gt;
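The merge can be sketched roughly as follows; the field names are made up for illustration (a real VMCS has a large number of architecturally defined fields):&lt;br /&gt;

```python
# Toy sketch of the VMCS merge: VMCS 0-to-1 combined with VMCS 1-to-2
# yields VMCS 0-to-2, the structure L0 uses to run L2 directly.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Guest state comes from what L1 set up for L2; host state must stay
    under the control of L0, since only L0 really owns the CPU and every
    exit has to land back in L0."""
    return {
        "guest_state": dict(vmcs_1_2["guest_state"]),  # the L2 register state
        "host_state": dict(vmcs_0_1["host_state"]),    # exits land in L0, not L1
        "controls": merge_controls(vmcs_0_1["controls"], vmcs_1_2["controls"]),
    }

def merge_controls(c01, c12):
    """L0 must intercept anything that either it or L1 wanted to intercept."""
    merged = dict(c01)
    for event, intercept in c12.items():
        merged[event] = merged.get(event, False) or intercept
    return merged
```

The design point the sketch tries to capture is that the merged structure is asymmetric: guest state flows up from L1, host state is pinned to L0.&lt;br /&gt;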
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
With n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU has only two page tables, via the mechanism called EPT, which translates virtual to guest-physical and guest-physical to host-physical. The three translations are therefore compressed onto the two hardware tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine and with shadow-on-EPT, which compresses the three logical translations onto two tables. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2; this process results in fewer exits.&lt;br /&gt;
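A rough way to picture the compression is as the composition of two translation tables, modelled here as plain dictionaries of page numbers (purely illustrative; real EPT tables are multi-level hardware structures):&lt;br /&gt;

```python
# Toy model of multi-dimensional paging: three logical translations must
# fit the two tables the MMU offers, so L0 composes EPT 1-to-2 with
# EPT 0-to-1 into a single EPT 0-to-2 table.

def build_ept_0_2(ept_1_2, ept_0_1):
    """For every page L1 mapped for L2, look up where L0 placed the
    backing L1 page and map L2 straight to L0-physical memory, removing
    one hop of translation (and the exits that hop would cost)."""
    ept_0_2 = {}
    for l2_phys, l1_phys in ept_1_2.items():
        if l1_phys in ept_0_1:          # only pages L0 has actually backed
            ept_0_2[l2_phys] = ept_0_1[l1_phys]
    return ept_0_2
```

With ept_1_2 = {0: 10} and ept_0_1 = {10: 100}, the composed table maps L2 page 0 straight to machine page 100.&lt;br /&gt;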
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which are aware that they are running in a virtual machine (Barham03, Russell08), and direct device assignment (Levasseur04, Yassour08), which gives the best performance. To get the best performance, the authors use an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; out of these, they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs to use an IOMMU to let its virtual machines safely access the device; but there is only one hardware IOMMU, so L0 emulates an IOMMU for L1. L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table, so that L2 can program the device directly and the device&#039;s DMAs land directly in L2&#039;s memory space.&lt;br /&gt;
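The IOMMU compression is essentially the same trick applied to DMA addresses; a toy sketch (dictionaries stand in for IOMMU page tables, which is a gross simplification):&lt;br /&gt;

```python
# Toy model of multi-level device assignment: there is only one hardware
# IOMMU, so L0 emulates an IOMMU for L1 and flattens the emulated table
# and its own table into the single real one, letting DMA issued with
# L2 addresses land directly in L2 memory.

def flatten_iommu(l1_iommu, l0_iommu):
    """Compose the emulated (L1-owned) and real (L0-owned) IOMMU mappings
    so a DMA address programmed by L2 resolves to machine memory."""
    hardware_table = {}
    for dma_addr, l1_phys in l1_iommu.items():
        if l1_phys in l0_iommu:
            hardware_table[dma_addr] = l0_iommu[l1_phys]
    return hardware_table
```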
&lt;br /&gt;
&lt;br /&gt;
How they implement the Micro-Optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2; each such transition involves an exit to L0 and then an entry. In L0 the most time is spent merging VMCSs, so they optimize this by copying data between VMCSs only if it has been modified, carefully balancing full copying against partial copying and tracking. VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, VMCS reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field. It turns out that VMCS data can be accessed without ill side effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones they tested).&lt;br /&gt;
The main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies the L2 specification.&lt;br /&gt;
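A crude cost model of the copy optimization, counting one unit of work per single-field vmread/vmwrite versus one unit per block copy (all numbers made up; only the shape of the saving matters):&lt;br /&gt;

```python
# Per-field VMCS access costs one (potentially trapping) instruction per
# field touched; the optimization moves whole blocks of adjacent fields
# with one large memory copy instead.

def merge_cost_per_field(num_fields):
    """Baseline: one vmread or vmwrite per VMCS field."""
    return num_fields

def merge_cost_bulk(num_fields, fields_per_copy):
    """With large memory copies the cost is the (ceiling) number of
    copies, not the number of fields."""
    return -(-num_fields // fields_per_copy)
```

For instance, touching 25 fields costs 25 single-field operations but only 4 block copies of 8 fields each.&lt;br /&gt;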
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance for Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3% and 6.3% for Specjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running on the guest hypervisor L1 is much slower than the same code running in L0.&lt;br /&gt;
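As a back-of-envelope illustration (our arithmetic, not the paper&#039;s: the paper measures only a single nested guest), per-level overheads compound multiplicatively as nesting levels are added:&lt;br /&gt;

```python
# If each added nesting level costs a fraction o of performance, the
# slowdown factors multiply: total slowdown is (1 + o) per level.

def cumulative_overhead(per_level_overhead, levels):
    """Return the total fractional overhead after the given number of
    nesting levels, assuming the per-level overhead stays constant."""
    slowdown = (1.0 + per_level_overhead) ** levels
    return slowdown - 1.0
```

At 10% per level, two levels already cost about 21%, which is one reason deeper nesting (L3, L4, ...) would be interesting to measure.&lt;br /&gt;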
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated by the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers and does not cause any clearly detectable deviation for the end user in the usage of applications on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first efficient implementation of this type that does not modify hardware (there have been less efficient designs), we expect to see increased interest in the nested integration model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, due to the fact that hypervisors can function inconspicuously underneath other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and the multi-dimensional paging technique. &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a masters thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project solution, all levels of hypervisors can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing, organization wise: They provide links and resources that can help give explanations to the concepts that they briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is the efficiency cost, which appears as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by the exits, which multiply at each nesting level. Furthermore, we observed that the paper performs tests at the L2 level, i.e. a guest with two hypervisors below it. It might have been useful, for understanding the limits of nesting, if they had investigated higher levels of nesting such as L4 or L5, just to see what the effect is. Another significant detriment of the paper is that optimizations such as the vmread and vmwrite avoidance are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* lots of exits cause significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing, organization wise: some concepts, such as the VMCSs, are written as if you were already familiar with how they work, or had read the appropriate references for that section of the research project.&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[2]  INTEL CORPORATION. Intel 64 and IA-32 Architectures Software Developers Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6608</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6608"/>
		<updated>2010-12-03T00:53:48Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he is still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I noticed is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information for each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the backgrounds concept here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration, as far as I know, we should be allowed to do this. If anyone have any suggestions to what I have posted or any counter arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edit in the Contribution and Critique but I agree lets open a thread here and All collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
For example, the VMCS(0 -&amp;gt;1) notation isn&#039;t explained.  I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach; they talk vaguely about the sideline things and provide references. The VMCS, from what I understood, is just a structure that sets up the environment used to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but i&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, i&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up, man. Sounds good.  I&#039;ll see what else I can do in between the other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed to it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The backgrounds section is done. I will keep editing it and filter some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper and editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware, I will try to write something for this. Then we go into the CPU, Memory, I/O and optimization. And I can see that someone already handled those things here in the discussion. So we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the Contribution section today or tonight, so no worries. The critique: as I said, I added some stuff there, but we still need to debate the good and bad of the design as we perceive it, and since it&#039;s a critique we can use the first person, &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, could each of us contribute to the Critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length, rather content, as you all obviously know. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually the contributions section is outlined below in the implementation here in the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours in 2 hours from now to ask the prof a couple of things including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub headings and stuff, I think it breaks the flow of the paper if we have a whole bunch of sub headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes Hesperus. If I have time I&#039;ll try to come, but i&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow through it but I will try later on tonight to resize it and make it smaller. Regarding the headings, yeah I can do that. I got sort of caught up with a lot of the terms and categorizations. I was even thinking about taking off the multiple-hardware support model, because its briefly mentioned in the paper and its not even available in x86 machines. I will ask the prof about those things. I will be seeing him in 30-40 minutes from now, his office hours start at 1:00 pm. Also if you guys notice any typos or misspellings, don&#039;t worry I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys.. whoever did the implementation section below which is basically the contribution, should try to edit it and take it to the main page, I have already provided the headings for the contribution in the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csullvia.  I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we better get this finished tonight by 12 or something.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
The headings, he said are fine. But he did mention that the article should make sense or be readable if we remove the headings or the section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice I&#039;m currently working on Critique section. Anticipate updates, modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest virtual machine the illusion that it is running directly on the physical hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one privilege level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to handle the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the main operating system (L1) runs a VM called L2, in turn, L2 runs another VM L3, L3 then runs L4 and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware context, it triggers a trap or fault which is caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute and, based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
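To make the trap-and-emulate flow above concrete, here is a minimal Python sketch. All of the names (the instruction set, the CR3 register, the class) are invented for illustration; this is not the paper&#039;s code, just the general shape of the model.&lt;br /&gt;

```python
# Hypothetical trap-and-emulate sketch. A guest's privileged
# instructions trap to the host hypervisor, which validates and
# emulates them against per-guest emulated state.

PRIVILEGED = {"read_cr3", "write_cr3", "out"}  # assumed trapping set

class HostHypervisor:
    def __init__(self):
        self.emulated_cr3 = 0  # emulated privileged state for the guest

    def handle_trap(self, instruction, operand=None):
        """Called when a guest executes a privileged instruction."""
        if instruction not in PRIVILEGED:
            raise ValueError("not a trapping instruction")
        if instruction == "write_cr3":
            self.emulated_cr3 = operand      # emulate the effect
            return None
        if instruction == "read_cr3":
            return self.emulated_cr3         # return the emulated state
        return None                          # e.g. silently emulate "out"

def guest_execute(host, instruction, operand=None):
    # Unprivileged instructions run natively; privileged ones trap.
    if instruction in PRIVILEGED:
        return host.handle_trap(instruction, operand)
    return "ran natively"
```

The point of the sketch is only that the guest never touches real privileged state: every privileged access is intercepted and answered from the hypervisor&#039;s emulated copy.&lt;br /&gt;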
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer is free to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but the authors were able to achieve it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machine directly through a hypervisor of their choice. In addition, it provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every other hypervisor running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM is handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trapping and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents that emulation to the hypervisor running on top of it (the guest hypervisor), letting it think it&#039;s running on the actual hardware. The idea is that when a guest hypervisor attempts an operation requiring hardware-level privileges, it triggers a fault or trap; this trap is caught by the host hypervisor and inspected to see whether it&#039;s a legitimate request. If it is, the host emulates the operation on the guest&#039;s behalf, again letting the guest think it&#039;s running directly on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap goes back to the main host hypervisor (L0). The host hypervisor then forwards the trap and virtualization state to whichever higher-level hypervisor is responsible for it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 traps down to L0, and L0 then forwards it back to L1. This is the model we&#039;re interested in because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
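The single-level model can be sketched in a few lines of Python. This is purely illustrative (the function and its arguments are invented, not from the paper): hardware always delivers an exit to L0, which then forwards it to the parent hypervisor of the trapping guest when the exit is that parent&#039;s responsibility.&lt;br /&gt;

```python
# Illustrative sketch of the single-level x86 model: the hardware
# always exits to L0 first; L0 then forwards the exit to the
# hypervisor directly below the trapping guest when appropriate.

def handle_exit(trapping_level, handled_by_parent):
    """Return the chain of hypervisors involved in one trap.

    trapping_level: the Ln guest that caused the exit (n >= 1).
    handled_by_parent: True if the exit is the responsibility of the
    guest's parent hypervisor L(n-1) rather than of L0 itself.
    """
    chain = ["L0"]                               # hardware exits to L0
    if handled_by_parent and trapping_level >= 2:
        chain.append(f"L{trapping_level - 1}")   # L0 forwards the exit
    return chain
```

For example, an exit from L2 that concerns L1 involves both L0 and L1, whereas an exit from L1 is handled by L0 alone, which is exactly the forwarding behaviour described above.&lt;br /&gt;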
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work [2]:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU when the virtual machine is launched. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and the x86 architecture provides only a single level of hardware support for virtualization. So, to multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; whenever L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility to handle.&lt;br /&gt;
To handle even a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, all of those operations trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
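A toy sketch of the VMCS merge may help here. The field names and the dict representation are invented for illustration; the real VMCS layout is defined by Intel [2]. The key idea is that VMCS0-&amp;gt;2 takes its guest state from VMCS1-&amp;gt;2 (what L1 wants L2 to look like) and its host state from VMCS0-&amp;gt;1 (so that every L2 exit lands in L0), while the exit controls are combined so that anything either L0 or L1 wants to intercept actually causes an exit.&lt;br /&gt;

```python
# Toy model of the VMCS merge (VMCS0->1 + VMCS1->2 -> VMCS0->2).
# Field names are illustrative, not Intel's VMCS encodings.

def merge_vmcs(vmcs01, vmcs12):
    return {
        # L2's register state, as specified by L1:
        "guest_state": dict(vmcs12["guest_state"]),
        # Exits must return control to L0, not to L1:
        "host_state": dict(vmcs01["host_state"]),
        # Intercept whatever either hypervisor wants intercepted:
        "exit_controls": vmcs01["exit_controls"] | vmcs12["exit_controls"],
    }
```

This is only the shape of the operation; the paper notes that the real merge of control and state fields is where much of the transition cost lies.&lt;br /&gt;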
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU provides only two page tables (using EPT): one from virtual to guest physical and one from guest physical to host physical. The authors compress the three translations onto the two tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine and &amp;quot;shadow-on-EPT&amp;quot;, which compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
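The construction of EPT0-&amp;gt;2 is essentially a composition of two mappings. Here is a hedged sketch using plain dicts as stand-in page tables (real EPTs are multi-level radix trees with permission bits; this representation is an assumption for illustration only):&lt;br /&gt;

```python
# Sketch of constructing EPT0->2: compose EPT1->2 (L2-physical ->
# L1-physical) with EPT0->1 (L1-physical -> L0-physical) so that
# L2-physical pages map directly to L0-physical pages.

def compose_ept(ept01, ept12):
    """Each table maps page numbers between adjacent levels."""
    return {l2_page: ept01[l1_page]
            for l2_page, l1_page in ept12.items()
            if l1_page in ept01}   # unmapped pages simply fault, as usual
```

With the composed table in place, the hardware can translate L2 addresses to L0 addresses in one lookup, which is where the reduction in exits comes from.&lt;br /&gt;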
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [Sugerman01], para-virtualized drivers, where the guest runs a driver that knows it is virtualized [Barham03, Russell08], and direct device assignment [LeVasseur04, Yassour08], which gives the best performance. To get the best performance they used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0, L1) needs to use an IOMMU to let its virtual machine access the device safely. There is only one hardware IOMMU, so L0 emulates an IOMMU for L1. L0 then compresses the multiple IOMMU page tables into the single hardware IOMMU page table, so that L2 can program the device directly and the device&#039;s DMAs go straight into L2&#039;s memory space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How they implement the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were confined to L0. They optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying versus partial copying with tracking. VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, per Intel&#039;s specification, VMCS reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time. On the processors they tested, VMCS data can be accessed without ill side-effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on other processors). The main cause of the slowdown in exit handling is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, the guest and host specifications can be read or written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
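The partial-copy optimization can be sketched as follows. The dirty-set tracking scheme and field names here are assumptions for illustration, not the paper&#039;s actual data layout: the point is only that L0 copies the modified VMCS fields rather than the whole structure on every L1-to-L2 transition.&lt;br /&gt;

```python
# Hedged sketch of partial VMCS copying with modification tracking:
# only fields recorded as dirty since the last merge are copied.

def sync_vmcs(src, dst, dirty):
    """Copy only the modified fields from src to dst; clear the set.

    src, dst: dicts standing in for two VMCS copies.
    dirty: set of field names modified since the last sync.
    Returns the number of fields actually copied.
    """
    for field in dirty:
        dst[field] = src[field]   # copy just the changed fields
    copied = len(dirty)
    dirty.clear()                 # nothing is dirty after the sync
    return copied
```

The trade-off the authors describe is exactly the one visible here: tracking has its own cost, so full copies can win when almost everything is dirty, and partial copies win when little has changed.&lt;br /&gt;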
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, designed to measure server-side performance for Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3% and with SPECjbb is 6.3%. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transition at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in a guest hypervisor such as L1 is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions. Removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing exit-handling code (the main cause of the slowdown is the additional exits in the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers; the end-user sees no clearly detectable deviation in the behaviour of applications running on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of this type that does not modify hardware (there have been less efficient designs), we expect to see increased interest in the nested integration model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, because hypervisors can run beneath other nested hypervisors and VMs without being detected. Moreover, the efficiency overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and direct paging (the multi-dimensional paging technique). &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project solution, all levels of hypervisor can take advantage of any virtualization support present. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing, organization wise: They provide links and resources that can help give explanations to the concepts that they briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by exits multiplying at each level. Furthermore, we observed that the paper performs its tests at the L2 level, a guest with two hypervisors below it. It might have been useful, for understanding the limits of nesting, if they had investigated higher levels of nesting such as L4 or L5, just to see what the effect is. Another significant detriment is that optimizations such as avoiding vmread and vmwrite are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* Lots of exits cause a significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing, organization-wise: some concepts, such as the VMCS, are written as though you are already familiar with how they work, or have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[2] Intel Corporation. Intel 64 and IA-32 Architectures Software Developer&#039;s Manual, 2009.&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6607</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6607"/>
		<updated>2010-12-03T00:53:34Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Implementation */  Just added reference link. nothing much&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (the Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am seeing Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here; the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because in last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we&#039;ve got to focus on that all together, not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the trap-call efficiency between the nested environments, so that&#039;s definitely a contribution. But it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints for what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0 -&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach; they vaguely talk about the sideline things and provide references. The VMCS, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here are some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* The solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* The solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth; other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, i&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up, man. Sounds good. I&#039;ll see what else I can do in between other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed to it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The backgrounds section is done. I will keep editing it and filter some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper and editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware, I will try to write something for this. Then we go into the CPU, Memory, I/O and optimization. And I can see that someone already handled those things here in the discussion. So we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the contribution section today or tonight, so no worries. The critique: as I said, I added some stuff there, but we still need to debate the good and bad of the design as perceived by our opinions. Since it&#039;s a critique, we can use first person: &amp;quot;I&amp;quot; and &amp;quot;to me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length, but rather content, as you all obviously know. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation here on the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours 2 hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub-headings and stuff; I think it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later on tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that. I got sort of caught up with a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware support model, because it&#039;s only briefly mentioned in the paper and isn&#039;t even available on x86 machines. I will ask the prof about those things. I will be seeing him 30-40 minutes from now; his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page. I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csulliva. I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we&#039;d better get this finished tonight by 12 or something.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
Regarding the headings, he said they are fine. But he did mention that the article should make sense or be readable if we remove the headings or the section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice I&#039;m currently working on Critique section. Anticipate updates, modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program, or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest the illusion that it is running directly on the physical hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise due to the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bare-metal hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
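The level numbering above can be made concrete with a tiny sketch (purely illustrative, not from the paper; the class and function names here are made up):

```python
# Toy model of nested virtualization levels: L0 is the bare-metal
# hypervisor, L1 a guest running on it, L2 a guest of L1, and so on.
# Purely illustrative; real nesting multiplexes hardware, not objects.

class Machine:
    def __init__(self, level):
        self.level = level   # 0 = bare-metal hypervisor, 1 = first guest, ...
        self.guest = None

    def launch_guest(self):
        # Each machine can host one guest running one level deeper.
        self.guest = Machine(self.level + 1)
        return self.guest

def deepest_level(root):
    # Follow the chain L0 -> L1 -> ... and return the deepest level number.
    m = root
    while m.guest is not None:
        m = m.guest
    return m.level
```

In this numbering, a setup benchmarked "at the L2 level" means a guest with two hypervisors (L0 and L1) beneath it.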
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain, or access privileged hardware context, it triggers a trap or a fault, which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
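The flow just described can be sketched roughly as follows (a hypothetical Python sketch; the instruction names and shadow state are invented for illustration and are not from the paper):

```python
# Sketch of trap and emulate: unprivileged guest instructions run
# directly, while privileged ones trap to the host hypervisor, which
# either emulates their effect on shadow state or denies them.

PRIVILEGED = {"read_cr3", "write_cr3", "io_out"}  # illustrative names

class HostHypervisor:
    def __init__(self):
        # Shadow copy of privileged guest state, maintained by the host.
        self.shadow = {"cr3": 0}

    def handle_trap(self, instr, operand=None):
        # Decide whether the trapped instruction is allowed, then emulate it.
        if instr == "read_cr3":
            return self.shadow["cr3"]     # emulate: return the shadowed value
        if instr == "write_cr3":
            self.shadow["cr3"] = operand  # emulate: update the shadow copy
            return None
        raise PermissionError("instruction %r denied" % instr)

def guest_execute(host, instr, operand=None):
    if instr in PRIVILEGED:
        return host.handle_trap(instr, operand)  # trap to the host
    return "ran directly on hardware"            # no trap needed
```

The guest never touches the real privileged state; it only ever sees the emulated results the host hands back.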
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture. This is the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend a single level of memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisors running on top of it. For instance, if L0 (the host hypervisor) runs L1, and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the traps, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;: each hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform to the hypervisor running on top of it (the guest hypervisor), letting it think it is running on the actual hardware. When a guest hypervisor tries to perform a privileged hardware-level operation, it triggers a fault or trap; this trap is caught by the host hypervisor and inspected to see whether it is a legitimate request. If it is, the host performs the privileged operation on the guest&#039;s behalf, again letting the guest think it is running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization state to the level responsible for it. For instance, if L0 runs L1, and L1 attempts to run L2, then the trap for launching L2 goes down to L0, and L0 forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does the nested VMX virtualization work [2]:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU when the virtual machine is launched. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine: the architecture supports only a single level of hardware virtualization, and L0 is the one using it. So, to multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2; when L2 traps, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility, as L2&#039;s hypervisor, to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, all of those operations trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
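The VMCS merge can be sketched in toy Python (our own illustration, not KVM&#039;s or the paper&#039;s code; the field names are made up, not Intel&#039;s): L0 takes the guest state L1 prepared in VMCS1-&amp;gt;2 and combines it with L0&#039;s own host state and controls from VMCS0-&amp;gt;1.&lt;br /&gt;

```python
# Toy sketch of the VMCS merge: L0 builds VMCS0->2 by combining the guest
# state L1 prepared for L2 (from VMCS1->2) with the host state and exit
# controls L0 itself needs (from VMCS0->1). Field names are illustrative.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Build VMCS0->2 so L0 can run L2 directly on the single hardware level."""
    return {
        # L2's register state comes from what L1 set up for its guest.
        "guest_state": vmcs_1_2["guest_state"],
        # On exit the CPU must return to L0, not L1, so host state is L0's.
        "host_state": vmcs_0_1["host_state"],
        # Trap on anything either L0 or L1 wants to intercept.
        "exit_controls": vmcs_0_1["exit_controls"] | vmcs_1_2["exit_controls"],
    }

vmcs_0_1 = {"guest_state": "L1 regs", "host_state": "L0 regs",
            "exit_controls": {"EXTERNAL_INTERRUPT"}}
vmcs_1_2 = {"guest_state": "L2 regs", "host_state": "L1 regs",
            "exit_controls": {"CPUID"}}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(vmcs_0_2["guest_state"])  # L2 regs: the CPU runs L2 directly
print(vmcs_0_2["host_state"])   # L0 regs: every exit lands in L0 first
```

Note how the merged structure explains why every L2 exit goes to L0 first: the host state in VMCS0-&amp;gt;2 is L0&#039;s, never L1&#039;s.&lt;br /&gt;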
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual address to an L2 physical address, from an L2 physical address to an L1 physical address, and from an L1 physical address to an L0 physical address. That is three levels of translation, yet the hardware MMU provides only two levels of page tables (the second being the EPT): virtual to guest physical, and guest physical to host physical. The three translations therefore have to be compressed onto the two available tables, going from start to end in two hops instead of three. One way is to use a shadow page table for the virtual machine (shadow-on-EPT), which compresses the three logical translations onto two tables; however, the guest page tables change frequently, while the EPT tables rarely change. Exploiting this, L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
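The EPT compression boils down to composing two mappings into one, which can be shown with a toy Python sketch (our own illustration; the page numbers are made up and real EPTs are multi-level radix trees, not flat dicts):&lt;br /&gt;

```python
# Toy sketch of the idea behind multi-dimensional paging: L0 composes the
# two nested translations EPT1->2 (L2 physical -> L1 physical) and
# EPT0->1 (L1 physical -> L0 machine page) into a single table EPT0->2,
# so the hardware can translate L2 addresses in one hop.

def compose(ept_1_2, ept_0_1):
    """Build EPT0->2 by walking each L2 page through both tables."""
    return {l2_page: ept_0_1[l1_page]
            for l2_page, l1_page in ept_1_2.items()
            if l1_page in ept_0_1}  # skip pages L0 hasn't mapped yet

ept_1_2 = {0: 7, 1: 3}    # L2 physical page -> L1 physical page
ept_0_1 = {7: 42, 3: 15}  # L1 physical page -> L0 (machine) page
ept_0_2 = compose(ept_1_2, ept_0_1)
print(ept_0_2)  # {0: 42, 1: 15}: L2 pages now map straight to machine pages
```

Because both inputs are EPT tables and those rarely change, the composed table rarely needs rebuilding, which is where the exit reduction comes from.&lt;br /&gt;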
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest runs a driver that knows it is virtualized (Barham03, Russell08), and direct device assignment (Levasseur04, Yassour08), which gives the best performance. To get the best performance, the authors use an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA, and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs an IOMMU to let its virtual machine access the device safely, but there is only one hardware IOMMU, so L0 emulates an IOMMU for L1. L0 then compresses the multiple IOMMU page tables into the single hardware IOMMU page table, so that L2 can program the device directly and the device DMAs straight into L2&#039;s memory space.&lt;br /&gt;
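The IOMMU compression mirrors the paging trick, as this toy Python sketch shows (our own illustration; the class names, addresses and page numbers are made up, and real IOMMU programming looks nothing like this):&lt;br /&gt;

```python
# Toy sketch of multi-level device assignment: L1 programs the IOMMU that
# L0 emulates for it, and L0 responds by installing the fully translated
# entry into the one hardware IOMMU table, so device DMA lands directly
# in L2's memory without L0 or L1 on the data path.

class HardwareIOMMU:
    def __init__(self):
        self.table = {}          # device-visible DMA address -> machine page

class L0:
    def __init__(self, hw_iommu, ept_0_1):
        self.hw = hw_iommu
        self.ept_0_1 = ept_0_1   # L1 physical page -> machine page

    def emulated_iommu_map(self, dma_addr, l1_page):
        # L1 thinks it is mapping into "its" IOMMU; L0 translates the
        # L1-physical target one more level and programs the real IOMMU.
        self.hw.table[dma_addr] = self.ept_0_1[l1_page]

hw = HardwareIOMMU()
l0 = L0(hw, ept_0_1={7: 42})
l0.emulated_iommu_map(dma_addr=0x5000, l1_page=7)  # L1 maps L2's DMA buffer
print(hw.table)  # {20480: 42}: the device DMAs straight to machine page 42
```

Once the hardware table holds the composed mapping, L2 can program the device and receive DMA without any exit to L0 or L1.&lt;br /&gt;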
&lt;br /&gt;
&lt;br /&gt;
How they implement the Micro-Optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. First, they optimized the transitions between L1 and L2; each such transition involves an exit to L0 and then an entry. In L0 most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying and tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once: by Intel&#039;s specification, VMCS reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field each, but VMCS data can be accessed without ill side-effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (though this might not work on processors other than the ones they tested). Second, the main cause of slow exit handling is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
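The full-versus-partial copy trade-off can be sketched in toy Python (our own illustration, not the paper&#039;s code; the field names and the threshold are made up):&lt;br /&gt;

```python
# Toy sketch of the VMCS copy optimization: instead of copying every field
# between VMCS copies on each transition, L0 tracks which fields were
# modified and copies only those, falling back to a full copy when most
# fields are dirty and tracking would cost more than it saves.

def sync_vmcs(src, dst, dirty, full_copy_threshold=0.5):
    """Copy src -> dst; partial copy if only a few fields are dirty."""
    if len(dirty) >= full_copy_threshold * len(src):
        dst.update(src)          # full copy: cheaper than tracking overhead
        return "full"
    for field in dirty:          # partial copy: only the modified fields
        dst[field] = src[field]
    return "partial"

src = {"rip": 0x1000, "rsp": 0x2000, "cr3": 0x3000, "rflags": 0x2}
dst = dict.fromkeys(src, 0)
print(sync_vmcs(src, dst, dirty={"rip"}))  # partial: one field synced
print(dst["rip"], dst["rsp"])              # 4096 0 (only rip was copied)
```

The real balance point depends on hardware costs, but the shape of the decision is the same: bulk memory copies when many fields change, targeted copies when few do.&lt;br /&gt;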
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb.&lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running on L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines the optimization steps taken to achieve this minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits triggered by the exit-handling code itself).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at systems programmers and does not affect the end user in any clearly detectable way in the usage of applications on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of this type that does not modify hardware (there have been less efficient designs), we expect to see increased interest in the nested virtualization model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework makes for convenient testing and debugging, since hypervisors can function inconspicuously beneath other nested hypervisors and VMs without being detected. Moreover, the performance overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmwrites and direct paging (the multi-dimensional paging technique). &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I&#039;ve read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (it&#039;s citation 12 in the paper) that, if I understand it right, also talks about software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware with virtualization support. In the Turtles project solution, all levels of hypervisors can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved, to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency dispute continues as nested virtualization enters our lives. The performance hit is mainly imposed by the large number of generated exits. Furthermore, we observed that the paper performs tests at the L2 level, a guest with two hypervisors below it. It might have been useful, for understanding the limits of nesting, if they had investigated a higher level of nesting such as L4 or L5, just to see what the effect is. Another significant detriment is that optimizations such as avoiding vmread and vmwrite are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* Lots of exits cause a significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: some concepts, such as the VMCSs, are written as though you should already be familiar with how they work, or should read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
[2]  INTEL CORPORATION. Intel 64 and IA-32 Architectures Software Developers Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6606</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6606"/>
		<updated>2010-12-03T00:52:45Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put down notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary? I posted a link in references and I&#039;l try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging which is discussed in the paper, computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add in what they thought worked/didn&#039;t work, with some explanation/references, and then we get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah, really great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow.  I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it&#039;s time for 3004. --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, the models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0 -&amp;gt;1) notation isn&#039;t explained.  I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach: they talk vaguely about the sideline things and provide references. The VMCS, from what I understood, is just a structure describing the environment used to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here are some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think; I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up man. Sounds good.  I&#039;ll see what else I can do in between the other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question directed at it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind:&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The background section is done. I will keep editing it and filter some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper, editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware; I will try to write something for this. Then we go into the CPU, memory, I/O and optimization parts. And I can see that someone already handled those things here in the discussion, so we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the contribution section today or tonight, so no worries. As for the critique, as I said I added some stuff there, but we still need to debate the good and bad of the design as we perceive it; since it&#039;s a critique, we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length, but rather content, as you all know obviously. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation part here on the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours two hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub-headings and stuff; I think it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that. I got sort of caught up in a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware support model, because it&#039;s briefly mentioned in the paper and it&#039;s not even available on x86 machines. I will ask the prof about those things; I will be seeing him 30-40 minutes from now, his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page; I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csullvia.  I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
-------&lt;br /&gt;
Ok, I didn&#039;t want to edit it myself, because I don&#039;t want to sound repetitive or redundant in my style. The prof should be locking the wiki sometime tomorrow at 7:00 am or 8:00 am, so we&#039;d better get this finished tonight by 12 or so.&lt;br /&gt;
&lt;br /&gt;
I went and spoke with the prof in his office an hour ago. Regarding the critique, he pointed out a few things that I will be working on in the next few hours, like the complexity of their design and whether it would remain efficient when applying multiple levels of virtualization. So I will write something on that; maybe we can combine our points into one paragraph or something.&lt;br /&gt;
&lt;br /&gt;
The headings, he said, are fine. But he did mention that the article should still make sense, or be readable, if we removed the headings or the section titles. I will be watching the discussion page frequently for comments and discussion. --[[User:Hesperus|Hesperus]] 20:10, 2 December 2010 (UTC)&lt;br /&gt;
--------&lt;br /&gt;
If you don&#039;t see any update until late at night, don&#039;t worry, I&#039;m coming back to do one final edit and grammar check for the whole article. &lt;br /&gt;
&#039;&#039;&#039;But please guys, if you have used any resources, then don&#039;t forget to add them&#039;&#039;&#039;. --[[User:Hesperus|Hesperus]] 21:31, 2 December 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
Cool. I see that Chris has added the contributions to the main page. I&#039;m currently adding the resources and will be adding a few other things later. --[[User:Hesperus|Hesperus]] 23:33, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Nice, nice. I&#039;m currently working on the critique section. Anticipate updates; modify at will. --[[User:Praubic|Praubic]] 23:40, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
I&#039;m just writing out the good copy of another assignment, I should be done in about an hour and can work on whatever needs working on. --[[User:Mbingham|Mbingham]] 23:55, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest the illusion that it is running directly on the physical hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where the technology is used, like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the possible issues that may arise due to the interactions of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bare-metal hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute or access privileged hardware context, it triggers a trap or a fault which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute. Based on that, the host hypervisor provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
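To make the control flow concrete, here is a toy Python sketch of the trap-and-emulate idea; the instruction names, the shadow state and the handler logic are all invented for illustration, not taken from the paper:&lt;br /&gt;

```python
# Toy model of trap-and-emulate: a guest's privileged instruction
# traps to the host hypervisor, which validates it and emulates
# its effect instead of letting the guest touch real hardware.

PRIVILEGED = {"read_cr3", "write_cr3", "out"}  # illustrative set

class HostHypervisor:
    def __init__(self):
        # Emulated (shadow) hardware state kept per guest.
        self.shadow_state = {"cr3": 0}

    def handle_trap(self, instr, operand):
        # Decide whether the trapped instruction is legitimate,
        # then emulate its effect on the guest's shadow state.
        if instr == "write_cr3":
            self.shadow_state["cr3"] = operand
            return "emulated"
        if instr == "read_cr3":
            return self.shadow_state["cr3"]
        return "denied"

def guest_execute(host, instr, operand=None):
    # Unprivileged instructions run natively; privileged ones trap.
    if instr in PRIVILEGED:
        return host.handle_trap(instr, operand)
    return "ran natively"
```

The guest never observes that its privileged writes went to a shadow copy rather than the real register, which is exactly the illusion the model is built on.&lt;br /&gt;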
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customers have the freedom to implement their systems on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture. This is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity; the next evolutionary step is to extend single-level memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every other hypervisor running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trap handling and such.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea here is that when a guest hypervisor tries to operate and gain hardware-level privileges, it evokes a fault or a trap; this trap or fault is then caught by the main host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate request. If it is, the host emulates the privileged operation for the guest, again having it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and the virtualization specification to the level responsible for it. For instance, suppose L0 runs L1, and L1 then attempts to run L2: the command to run L2 goes down to L0, and L0 forwards this command back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
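As a rough illustration of the single-level dispatch path, here is a toy Python sketch; the data structures are invented for illustration (real hardware delivers the trap to L0 via a VM exit):&lt;br /&gt;

```python
# Toy sketch of the single-level model: hardware delivers every trap
# to L0 first; L0 then forwards it to whichever hypervisor is
# responsible for the VM that trapped (L0 itself for L1, L1 for L2).

def dispatch_trap(trapping_vm, parent_of):
    # parent_of maps a VM level to the hypervisor level that runs it,
    # e.g. {1: 0, 2: 1} means L0 runs L1 and L1 runs L2.
    visited = [0]                 # control always enters L0 first
    handler = parent_of[trapping_vm]
    if handler != 0:
        visited.append(handler)   # L0 forwards the trap upward
    return visited
```

So a trap from L2 touches two hypervisors (L0, then L1), while a trap from L1 is handled by L0 alone; this is why reducing the cost and number of exits matters so much in the paper.&lt;br /&gt;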
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine: only L0 is using the architectural mode for a hypervisor. So, in order to multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2 (enabling L0 to run L2 directly). L0 now launches L2; whenever L2 causes a trap, L0 handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it.&lt;br /&gt;
While handling a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem; but because L1 is running in guest mode as a virtual machine, all of those operations trap, so a single high-level L2 exit (or L3 exit) causes many low-level exits (and more exits means less performance). This problem was addressed by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process is repeated continuously.&lt;br /&gt;
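A rough sketch of the flattening step, assuming a VMCS is just a dictionary of fields; the field names and merge rules here are a simplified illustration (the real merge logic in the paper is considerably more involved):&lt;br /&gt;

```python
# Toy sketch of VMCS "merging": L0 combines the environment it gives
# to L1 (vmcs_0_1) with the environment L1 prepared for L2 (vmcs_1_2)
# into a single control structure vmcs_0_2 the hardware can run.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    merged = {}
    # Guest state: L2's state exactly as L1 specified it.
    merged["guest_state"] = dict(vmcs_1_2["guest_state"])
    # Host state: on an exit, control must return to L0, not L1,
    # so the host fields come from L0's own structure.
    merged["host_state"] = dict(vmcs_0_1["host_state"])
    # Control fields: union of the events L0 and L1 each want to
    # trap on, so L0 sees every exit either party cares about.
    merged["exit_controls"] = (vmcs_0_1["exit_controls"]
                               | vmcs_1_2["exit_controls"])
    return merged
```

The key design point is that the merged structure runs L2 directly on the hardware while still routing every exit through L0, which can then decide whether L1 needs to see it.&lt;br /&gt;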
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation, yet the hardware MMU exposes only two page tables, via what is called EPT (extended page tables): one taking virtual to guest-physical addresses, and one taking guest-physical to host-physical. The three translations must therefore be compressed onto the two available tables, going from beginning to end in two hops instead of three. This can be done with a shadow page table for the virtual machine, i.e. shadow-on-EPT, which compresses the three logical translations into two tables; but the guest page tables change frequently, whereas the EPT tables rarely change. So L0 emulates EPT for L1, and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
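The compression of translations boils down to composing mappings. A minimal sketch, with page tables modelled as plain dictionaries of page numbers (real EPTs are multi-level radix trees, and the page numbers here are invented):&lt;br /&gt;

```python
# Toy sketch of collapsing nested address translation:
# ept_1_2 maps L2-physical pages to L1-physical pages, and
# ept_0_1 maps L1-physical pages to L0-physical pages.
# L0 composes them into ept_0_2 so the hardware can translate
# an L2-physical address in one hop instead of two.

def compose(ept_0_1, ept_1_2):
    ept_0_2 = {}
    for l2_page, l1_page in ept_1_2.items():
        if l1_page in ept_0_1:          # only pages L0 has backed
            ept_0_2[l2_page] = ept_0_1[l1_page]
    return ept_0_2
```

A page missing from the outer table simply stays unmapped in the composed table, which in a real system would surface as a fault for L0 to resolve.&lt;br /&gt;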
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which are aware they run on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; out of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0, L1) needs to use an IOMMU to allow its virtual machines to safely access the device directly; but there is only one platform IOMMU, so L0 emulates an IOMMU for L1. L0 then compresses the multiple IOMMU page tables into the single hardware IOMMU page table, so that L2 programs the device directly and the device DMAs into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How they implemented the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2, and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. First, they optimized the transitions between L1 and L2. Each transition involves an exit to L0 and then an entry, and in L0 most of the time is spent merging VMCSs; they optimized this by copying data between VMCSs only if it had been modified, carefully balancing full copying versus partial copying plus tracking. VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time, but VMCS data can be accessed without ill side effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested).&lt;br /&gt;
Second, the main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
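The bulk-copy idea can be shown with a toy Python sketch; the field names are invented, and the trap counts simply model that each single-field vmread/vmwrite executed by L1 would cause an exit to L0, while one large memory copy causes none:&lt;br /&gt;

```python
# Toy model of the VMCS-copy micro-optimization.

def copy_per_field(src, dst, fields):
    # Naive path: one vmread plus one vmwrite per field, and each
    # privileged instruction executed by L1 traps to L0.
    traps = 0
    for f in fields:
        dst[f] = src[f]
        traps += 2
    return traps

def copy_bulk(src, dst, fields):
    # Optimized path: one large memory copy of the whole region,
    # no privileged instructions, hence no traps.
    for f in fields:
        dst[f] = src[f]
    return 0
```

Both paths produce the same destination state; the optimization only changes how many times control bounces down to L0 along the way.&lt;br /&gt;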
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments&lt;br /&gt;
&lt;br /&gt;
The overhead of nested virtualization is 10.3% for kernbench and 6.3% for SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in the guest hypervisor (L1) is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines the optimization steps taken to minimize this overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing VMCS data under certain conditions, removing the need to trap and emulate them.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated within the exit-handling code itself).&lt;br /&gt;
&lt;br /&gt;
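As a rough illustration of what these figures could mean for deeper nesting, here is a back-of-the-envelope model that assumes (our assumption, not a claim of the paper, which only reports single-level nested numbers) that each extra level multiplies runtime by a fixed factor:&lt;br /&gt;

```python
# Toy model: runtime after n nesting levels, assuming each level adds a
# fixed fractional overhead (6.3% for SPECjbb, 10.3% for kernbench).
# The compounding assumption is ours, used only for illustration.

def nested_runtime(base, per_level_overhead, levels):
    """Runtime after `levels` of nesting, each adding the same fraction."""
    return base * (1 + per_level_overhead) ** levels

base = 100.0  # seconds on bare metal (illustrative)
for overhead in (0.063, 0.103):  # SPECjbb / kernbench overheads
    print([round(nested_runtime(base, overhead, n), 1) for n in range(4)])
# [100.0, 106.3, 113.0, 120.1]
# [100.0, 110.3, 121.7, 134.2]
```

Under this model the slowdown stays modest even a few levels deep, which is consistent with the authors describing the per-level overhead as acceptable.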
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
The paper unequivocally demonstrates a strong contribution in the area of virtualization and data sharing within a single machine. It is aimed at programmers and produces no clearly detectable deviation for end-users running applications on top of this architecture. Nevertheless, the contribution is visible with respect to security and compatibility. Since this is the first successful implementation of its type that does not modify hardware (there have been less efficient designs), we expect to see increased interest in the nested virtualization model described above.--[[User:Praubic|Praubic]] 23:37, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The framework is convenient for testing and debugging, since hypervisors can run beneath other nested hypervisors and VMs without being detected. Moreover, the overhead is reduced to 6-10% per level thanks to optimizations such as omitted vmread/vmwrite traps and multi-dimensional paging (a multi-level paging technique). &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware with virtualization support. In the Turtles project solution, all levels of hypervisor can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
* Relatively low performance cost for each level. As mentioned in the video, the team successfully achieved a 6 to 10% performance overhead for each nesting level.&lt;br /&gt;
&lt;br /&gt;
* Thanks to several optimizations, the efficiency is greatly improved to an acceptable level:&lt;br /&gt;
         - Bypassing vmread and vmwrite instructions and directly accessing data under certain conditions&lt;br /&gt;
         - Optimizing exit handling code and consequently reducing number of exits.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main drawback is efficiency, which suffers as the authors introduce an additional level of abstraction. The everlasting memory/efficiency trade-off continues as nested virtualization enters our lives. The performance hit is mainly imposed by the multiplication of exits. Furthermore, we observed that the paper performs its tests at the L2 level, a guest with two hypervisors below it. It might have been useful, to understand the limits of nesting, if they had investigated higher levels of nesting such as L4 or L5, just to see what the effect is. Another significant detriment of the paper is that optimizations such as vmread and vmwrite avoidance are aimed at specific machines (i.e. Intel).&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
* lots of exits cause significant performance cost.&lt;br /&gt;
&lt;br /&gt;
* Writing/organization-wise: some concepts, such as the VMCS, are written assuming that you are already familiar with how they work, or that you have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;br /&gt;
[2] Intel Corporation. Intel 64 and IA-32 Architectures Software Developer&#039;s Manual. 2009&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6478</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6478"/>
		<updated>2010-12-02T18:38:08Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him--JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great, please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging which is discussed in the paper, and computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of these concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edits in the Contribution and Critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMC technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMC(0 -&amp;gt; 1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach, they vaguely talk about the sideline things and provide references. The VMC technology from what I understood is just a creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but i&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, i&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for the clear up man. Sounds good.  I&#039;ll see what else I can do in between other work I got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed to it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so its not like theres a specific answer or something.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The backgrounds section is done. I will keep editing it and filter some of the information. I don&#039;t have a lot of things to do today, so I will spend the whole day working on the paper and editing it and adding the references. I added some sub-sections for the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware, I will try to write something for this. Then we go into the CPU, Memory, I/O and optimization. And I can see that someone already handled those things here in the discussion. So we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t wanna get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the Contribution section today or tonight, so no worries. The critique, as I said, I had added some stuff there, but we still need to debate the good and bad of the design as perceived by our opinions; since it&#039;s a critique we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the Critique part (here in the discussion) in point form and then we glue it together in concise sentences? We have to get straight to the point. We are not aiming for length, rather content, as you all know obviously. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually the contributions section is outlined below in the implementation here in the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours in 2 hours from now to ask the prof a couple of things including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section, and had a couple of questions. Firstly, would it be possible to maybe scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think maybe we should think about consolidating some of the sub headings and stuff, I think it breaks the flow of the paper if we have a whole bunch of sub headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom). I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes Hesperus. If I have time I&#039;ll try to come, but i&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow through it but I will try later on tonight to resize it and make it smaller. Regarding the headings, yeah I can do that. I got sort of caught up with a lot of the terms and categorizations. I was even thinking about taking off the multiple-hardware support model, because its briefly mentioned in the paper and its not even available in x86 machines. I will ask the prof about those things. I will be seeing him in 30-40 minutes from now, his office hours start at 1:00 pm. Also if you guys notice any typos or misspellings, don&#039;t worry I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys.. whoever did the implementation section below which is basically the contribution, should try to edit it and take it to the main page, I have already provided the headings for the contribution in the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csullvia.  I will go ahead and do it for him if he can&#039;t and has something else to do.--JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] This emulation, usually referred to as a virtual machine and comprising a virtualized environment (and possibly a guest hypervisor), gives the guest the illusion that it is running directly on the physical hardware. In other words, we can view the virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where the technology is used, such as data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that sits one level above the supervisor and runs directly on the bare hardware, monitoring the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bare-metal hypervisor (L0) runs a VM called L1; L1 in turn runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware state, it triggers a trap or fault which is caught by the host hypervisor. The host hypervisor then determines whether the instruction should be allowed to execute, and based on that decision it provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
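The trap-and-emulate cycle described above can be sketched as a toy simulation (all names here are illustrative, not from the paper): unprivileged guest operations run directly, while privileged ones trap to the host hypervisor, which validates them and emulates their effect on virtual state.

```python
# Toy model of trap-and-emulate (illustrative names only).
# Privileged operations by the guest trap to the host hypervisor,
# which validates them and emulates the requested outcome.

PRIVILEGED = {"write_cr3", "out_port"}  # ops that must trap

class HostHypervisor:
    def __init__(self):
        self.emulated_state = {}

    def handle_trap(self, op, value):
        # Decide whether the request is legitimate; if so, emulate
        # its effect on virtual (not real) hardware state.
        if op in PRIVILEGED:
            self.emulated_state[op] = value
            return "emulated"
        return "denied"

def guest_execute(host, op, value):
    # Unprivileged ops run directly; privileged ops trap to the host.
    if op in PRIVILEGED:
        return host.handle_trap(op, value)   # trap to the host
    return "ran_directly"

host = HostHypervisor()
assert guest_execute(host, "add", 1) == "ran_directly"
assert guest_execute(host, "write_cr3", 0x1000) == "emulated"
assert host.emulated_state["write_cr3"] == 0x1000
```

The guest never touches real privileged state; it only ever sees the emulated copy the host maintains for it.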
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system can provide the user with a compatibility mode for other operating systems or applications. An example of this is the Windows XP mode available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to let customers host their own preferred, user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that is easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it becomes corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture, unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. The approach is software-based, meaning the authors do not alter the underlying architecture. This is the most interesting aspect of the paper: x86 hardware does not support nested virtualization directly, yet they were able to achieve it efficiently in software.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example, virtualization on servers has been rapidly gaining popularity; the next evolutionary step is to extend the single level of memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does this concept apply to the quickly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user can manage his own virtual machines directly through a hypervisor of his choice. In addition, nesting provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for providing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: each hypervisor handles the traps of the hypervisor running directly on top of it. For instance, if L0 (the host hypervisor) runs L1 and L1 attempts to run L2, the trap handling and the work needed to let L1 instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that whenever a guest hypervisor tries to perform an operation requiring hardware-level privileges, it triggers a fault or trap; this trap is caught by the main host hypervisor and inspected to see whether it is a legitimate and appropriate request. If it is, the host emulates the privileged operation on behalf of the guest, again letting the guest think that it is running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor, which then forwards the trap and the related virtualization work to the level responsible for it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 first traps down to L0, and L0 then forwards it back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. See figure 1 in the paper for a better understanding of this.&lt;br /&gt;
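The single-level model above can be sketched as a toy trap router (names are illustrative): the hardware always delivers a trap to L0 first, and L0 then forwards it to the hypervisor responsible for the guest that trapped.

```python
# Toy model of single-level trap handling (illustrative only).
# Hardware always delivers traps to L0; L0 forwards each trap to
# the hypervisor one level below the guest that caused it.

def handle_trap(trap_source_level):
    """Return the chain of hypervisors a trap visits.

    trap_source_level is the nesting level of the guest that
    trapped (2 means a guest run by L1, and so on).
    """
    chain = ["L0"]                       # hardware delivers to L0 first
    responsible = trap_source_level - 1  # hypervisor that runs the guest
    if responsible > 0:
        chain.append("L%d" % responsible)  # L0 forwards the trap
    return chain

# A trap in L1 is handled by L0 alone; a trap in L2 reaches L0
# first and is then forwarded to L1.
assert handle_trap(1) == ["L0"]
assert handle_trap(2) == ["L0", "L1"]
assert handle_trap(3) == ["L0", "L2"]
```

This is why deep nesting is expensive on x86: every trap, at every level, takes a round trip through L0.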
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work?&lt;br /&gt;
L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (VMCS: virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and the architecture supports only a single hypervisor mode. To multiplex the hardware so that L2 runs as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the responsibility of L1&#039;s virtual machine.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of these operations traps, so a single high-level L2 (or L3) exit causes many exits (and more exits mean lower performance). This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and the process repeats continuously.&lt;br /&gt;
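The VMCS merge can be sketched as combining two environment specifications (the field names below are invented for illustration, not real Intel VMCS fields): guest-state fields of VMCS0-&amp;gt;2 come from L1&#039;s specification of L2, while host-state fields come from L0&#039;s own VMCS, so that exits return control to L0 rather than L1.

```python
# Toy sketch of the VMCS0->2 merge (field names are invented, not
# real Intel VMCS fields). L0 combines its own specification of L1
# (VMCS 0->1) with L1's specification of L2 (VMCS 1->2) so the CPU
# can run L2 directly on behalf of L1.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    merged = {}
    # Guest-state fields describe L2 and come from L1's VMCS.
    for k, v in vmcs_1_2.items():
        if k.startswith("guest_"):
            merged[k] = v
    # Host-state fields must return control to L0, not L1,
    # so they come from L0's own VMCS.
    for k, v in vmcs_0_1.items():
        if k.startswith("host_"):
            merged[k] = v
    return merged

vmcs_0_1 = {"guest_rip": 0x1000, "host_rip": 0xffff0000}  # L0's view of L1
vmcs_1_2 = {"guest_rip": 0x2000, "host_rip": 0xeeee0000}  # L1's view of L2
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
assert vmcs_0_2 == {"guest_rip": 0x2000, "host_rip": 0xffff0000}
```

The real merge is much more involved (the paper discusses which fields need translation rather than plain copying), but the shape of the operation is this.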
&lt;br /&gt;
How does multi-dimensional paging work?&lt;br /&gt;
With n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU supports only two page tables, the second being the EPT (extended page table): one takes virtual to guest-physical addresses, and the other takes guest-physical to host-physical addresses. The three translations are therefore compressed onto the two tables, going from beginning to end in two hops instead of three. This is done with a shadow page table for the virtual machine, a technique called shadow-on-EPT, which compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently. L0 emulates EPT for L1, and it uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2; this process results in fewer exits.&lt;br /&gt;
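The compression of three translations into two can be sketched as table composition (a simplification with made-up page numbers): L0 folds EPT1-&amp;gt;2 and EPT0-&amp;gt;1 into a single EPT0-&amp;gt;2, so the hardware walks only the guest page table plus one EPT.

```python
# Toy sketch of multi-dimensional paging (page numbers invented).
# Three logical translations exist: L2-virtual -> L2-physical
# (guest page table), L2-physical -> L1-physical (EPT 1->2), and
# L1-physical -> L0-physical (EPT 0->1). The hardware supports only
# two tables, so L0 composes the last two into EPT 0->2.

ept_1_2 = {10: 20, 11: 21}   # L2-physical page -> L1-physical page
ept_0_1 = {20: 30, 21: 31}   # L1-physical page -> L0-physical page

def compose(outer, inner):
    # For each page mapped by the inner table, follow the outer
    # table to get a direct one-hop mapping.
    return {g: outer[h] for g, h in inner.items() if h in outer}

ept_0_2 = compose(ept_0_1, ept_1_2)   # L2-physical -> L0-physical
assert ept_0_2 == {10: 30, 11: 31}

# With EPT 0->2 in hardware, translating an L2 address takes two
# hops (guest table, then EPT 0->2) instead of three.
guest_pt = {5: 10}                     # L2-virtual -> L2-physical
assert ept_0_2[guest_pt[5]] == 30
```

In reality L0 builds EPT0-&amp;gt;2 lazily, on EPT violations, rather than eagerly composing whole tables; the sketch only shows the direction of the compression.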
&lt;br /&gt;
How does I/O virtualization work?&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation [Sugerman01], para-virtualized drivers that are aware they run on a hypervisor [Barham03, Russell08], and direct device assignment [LeVasseur04, Yassour08], which gives the best performance. To get that performance, the authors use an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU to allow its virtual machines to access the device safely. There is only one hardware IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU tables into the single hardware IOMMU page table, so that L2 programs the device directly and the device DMAs into L2&#039;s memory space directly.&lt;br /&gt;
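The same compression idea applies to DMA (again a simplification with invented page numbers): L1 programs its emulated IOMMU in terms of L1-physical pages, and L0 translates those through EPT0-&amp;gt;1 before loading the single real IOMMU, so the device can DMA straight into L2&#039;s memory.

```python
# Toy sketch of multi-level device assignment for DMA (invented
# numbers). L1 programs its emulated IOMMU to map device DMA
# addresses to L1-physical pages; L0 translates those pages
# through EPT 0->1 and loads the result into the one real IOMMU.

l1_iommu = {0x100: 40, 0x101: 41}   # device DMA page -> L1-physical page
ept_0_1 = {40: 50, 41: 51}          # L1-physical page -> L0-physical page

def build_hw_iommu(l1_table, ept):
    # Compress the emulated IOMMU and the EPT into one table that
    # maps device DMA pages directly to host-physical pages.
    return {dma: ept[p] for dma, p in l1_table.items() if p in ept}

hw_iommu = build_hw_iommu(l1_iommu, ept_0_1)
assert hw_iommu == {0x100: 50, 0x101: 51}
# The device now DMAs into L2's memory with no exit to L0 or L1.
```

Once the hardware IOMMU holds the compressed table, the data path involves no hypervisor at all; only table updates still trap.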
&lt;br /&gt;
&lt;br /&gt;
How did they implement the micro-optimizations to make it go faster?&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2: each such transition involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying plus tracking. VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, per Intel&#039;s specification, VMCS reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time; however, VMCS data can be accessed without ill side effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not have to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
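The vmread/vmwrite optimization can be illustrated by counting traps (a toy simulation, not real VMX code): per-field accesses by L1 each cost an exit to L0, while a single bulk memory copy avoids those per-field exits entirely.

```python
# Toy illustration of the VMCS copying optimization (a simulation,
# not real VMX code). In guest mode, every vmread/vmwrite by L1
# traps to L0; copying many fields with one large memory copy
# avoids those per-field exits.

exits = {"count": 0}

def vmread_trapped(vmcs, field):
    exits["count"] += 1        # each access costs one exit to L0
    return vmcs[field]

def copy_per_field(src, fields):
    return {f: vmread_trapped(src, f) for f in fields}

def copy_bulk(src):
    # Copy the whole VMCS region with ordinary memory operations;
    # the paper notes this worked on their test processors but is
    # not guaranteed by Intel's specification.
    return dict(src)

vmcs = {"guest_rip": 1, "guest_rsp": 2, "exit_reason": 3}

exits["count"] = 0
assert copy_per_field(vmcs, list(vmcs)) == vmcs
assert exits["count"] == 3     # one exit per field

exits["count"] = 0
assert copy_bulk(vmcs) == vmcs
assert exits["count"] == 0     # no exits for the bulk copy
```

The trade-off the authors note is correctness versus speed: the bulk copy relies on processor behaviour outside the architectural contract.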
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3%, and 6.3% with SPECjbb.&lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running on a guest hypervisor such as L1 is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and accessing VMCS data directly under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits triggered within the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they argue it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project solution, all levels of hypervisors can take advantage of any virtualization support present. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
*Writing/organization-wise: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who&#039;s interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
*Writing/organization-wise: some concepts, such as the VMCSs, are written as if you are already familiar with how they work, or have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, i.e. a guest with two hypervisors below it. I think it would have been useful for understanding the limits of nesting if they had run some tests at an even higher level of nesting (L4 or L5 or whatever), just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6471</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6471"/>
		<updated>2010-12-02T18:32:33Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes three of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one, so did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, as I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (the Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique; I mean, those sections should not be based on or written from the perspective of one person. We all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we gotta focus on that altogether, not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly say why nested virtualization is necessary. I posted a link in the references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that; look at the first two paragraphs of the introduction section on page 1. But you&#039;re right, they don&#039;t really elaborate; I think that&#039;s because it isn&#039;t the purpose or aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, protection rings, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging, which is discussed in the paper, and protection rings, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here: should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or on some of the concepts discussed later on in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, though it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; That&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he said we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read it or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain it in two small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us another week to do it. I am sure we all have at least three projects due soon besides this essay. I&#039;ll type up the stuff I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone (or some people) to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice, man. Sorry I haven&#039;t updated with anything I&#039;ve done yet, but I&#039;ll have it up later today or tomorrow. I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it&#039;s time for 3004. --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted the other bits that were missing from the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here. The contribution section still needs a lot of work. We need to talk about their innovations and what they did for:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again, and I am wondering about some things. How are we to critique it? By their methods, or by the paper itself?&lt;br /&gt;
I find that, in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
For example, the VMCS(0-&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach: they talk only vaguely about the sideline things and provide references. The VMCS, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization topics. That&#039;s not to say we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* The solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow: probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think; I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for clearing that up, man. Sounds good. I&#039;ll see what else I can do in between the other work I&#039;ve got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question directed at it in the exam review. If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind:&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Hey guys. The points that Michael mentioned sound pretty great. I think the critique more or less depends on our understanding of the paper, so it&#039;s not like there&#039;s one specific answer.&lt;br /&gt;
I will also be seeing the prof tomorrow in his office hours; if anyone wants to join me, I will post something here before I go.&lt;br /&gt;
&lt;br /&gt;
The background section is done. I will keep editing it and filtering some of the information. I don&#039;t have a lot to do today, so I will spend the whole day working on the paper, editing it and adding the references. I added some sub-sections to the contributions section. The theory part should just talk about the way they&#039;re flattening the levels of virtualization and multiplexing the hardware; I will try to write something for this. Then we go into the CPU, memory, I/O and optimization parts. And I can see that someone already handled those things here in the discussion, so we&#039;re pretty much done. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PS: Guys, please don&#039;t forget about the references. We don&#039;t want to get into any trouble with the prof in that regard.&#039;&#039;&#039;&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 08:51, 2 December 2010 (UTC)&lt;br /&gt;
------------------&lt;br /&gt;
Alright, I will do some of the contribution section today or tonight, so no worries. As for the critique, as I said, I added some stuff there, but we still need to debate the good and the bad of the design as we perceive them; since it&#039;s a critique, we can use the first person: &amp;quot;I&amp;quot; and &amp;quot;To me&amp;quot;. --[[User:Praubic|Praubic]] 15:37, 2 December 2010 (UTC)&lt;br /&gt;
-------------------&lt;br /&gt;
Also, how about each of us contributes to the critique part (here in the discussion) in point form, and then we glue it together into concise sentences? We have to get straight to the point. We are not aiming for length but for content, as you all obviously know. --[[User:Praubic|Praubic]] 15:53, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
Actually, the contributions section is outlined below in the implementation part here on the discussion page. So whoever did that should edit it and take it to the main page. I&#039;m going to the office hours two hours from now to ask the prof a couple of things, including the critique. --[[User:Hesperus|Hesperus]] 15:58, 2 December 2010 (UTC)&lt;br /&gt;
--------------------&lt;br /&gt;
I was just looking over the background concepts section and had a couple of questions. Firstly, would it be possible to scale the image down and have the text flow around it? Right now it seems to break the &amp;quot;flow&amp;quot; a bit, if that makes sense. Secondly, I think we should consider consolidating some of the sub-headings; I think it breaks the flow of the paper if we have a whole bunch of sub-headings that only have a couple of sentences of explanation. Also, I added some stuff to the critique section on the talk page here (right at the bottom); I&#039;ll add some more later. Let me know what you guys think, and let us know how the meeting with Anil goes, Hesperus. If I have time I&#039;ll try to come, but I&#039;ve got two other projects on the go right now too, haha. --[[User:Mbingham|Mbingham]] 16:56, 2 December 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
Honestly, I don&#039;t know how to scale down the picture and make the text flow around it, but I will try later tonight to resize it and make it smaller. Regarding the headings, yeah, I can do that; I got sort of caught up in a lot of the terms and categorizations. I was even thinking about taking out the multiple-hardware support model, because it&#039;s only briefly mentioned in the paper and isn&#039;t even available on x86 machines. I will ask the prof about those things; I will be seeing him 30-40 minutes from now, since his office hours start at 1:00 pm. Also, if you guys notice any typos or misspellings, don&#039;t worry, I will be editing the whole thing tonight. --[[User:Hesperus|Hesperus]] 17:36, 2 December 2010 (UTC)&lt;br /&gt;
--------------------------&lt;br /&gt;
Guys, whoever did the implementation section below, which is basically the contribution, should try to edit it and take it to the main page; I have already provided the headings for the contribution on the main page. I&#039;m currently working on the theory bit in that very same section. --[[User:Hesperus|Hesperus]] 17:43, 2 December 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&lt;br /&gt;
That was Csullvia. I will go ahead and do it for him if he still has anything left to do. --JSlonosky 18:32, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, merely gives the guest virtual machine the illusion that it&#039;s running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise due to the interaction of those guest virtual machines with one another, and their interaction with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the main operating system or hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain or access privileged hardware context, it triggers a trap or a fault which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
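As a rough illustration of this model (a hypothetical Python sketch, not code from the paper; the instruction names and handler logic are invented), a privileged instruction traps to the host hypervisor, which emulates its effect on the guest&#039;s virtual state:&lt;br /&gt;

```python
# Illustrative trap-and-emulate sketch. Instruction names are hypothetical.
PRIVILEGED = {"write_cr3", "hlt", "out"}  # instructions that trap

def host_hypervisor_handle(instr, guest_state):
    """The host hypervisor catches the trap and emulates the effect."""
    if instr == "write_cr3":
        # Emulate: update only the guest's virtual page-table base.
        guest_state["cr3"] = guest_state["pending_cr3"]
    elif instr == "hlt":
        guest_state["halted"] = True
    # ...other privileged instructions would be emulated here
    return guest_state

def execute(instr, guest_state):
    """Guest instructions run natively unless they are privileged."""
    if instr in PRIVILEGED:
        return host_hypervisor_handle(instr, guest_state)  # trap to host
    guest_state["executed"].append(instr)  # runs directly on hardware
    return guest_state
```

The point of the sketch: unprivileged work runs at native speed, and only privileged operations pay the cost of a round trip through the host hypervisor.&lt;br /&gt;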
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement their system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, can host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for the live migration or transfer of virtual machines in cases of upgrades or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concepts stuff, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the background concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture, unless they are using someone else&#039;s technique for virtualizing these instructions. --[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, nesting provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every other hypervisor running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will do the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to operate on, or gain, hardware-level privileges, it triggers a fault or a trap; this trap or fault is then caught by the main host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate command or request. If it is, the host emulates the requested outcome for the guest, again having it think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and virtualization specification to the level above that is involved or responsible. For instance, suppose L0 runs L1, and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then forwards this command back to L1. This is the model we&#039;re interested in, because it is what x86 machines basically follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
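The forwarding described above can be pictured with a small sketch (a hypothetical Python illustration, not code from the paper): in the single-level model, every exit first lands in L0, which then forwards it up, one level at a time, to the hypervisor responsible for the trapping guest:&lt;br /&gt;

```python
# Sketch: in single-level x86 virtualization, hardware always exits to L0.
# L0 then forwards the exit toward the hypervisor one level below the
# trapping guest (L_{n-1} is responsible for L_n). Hypothetical model.

def hardware_exit(trapping_level, exits_log):
    """All exits go to L0 first, regardless of nesting depth."""
    exits_log.append((trapping_level, 0))  # hardware hands control to L0
    forward_exit(0, trapping_level, exits_log)

def forward_exit(current, trapping_level, exits_log):
    responsible = trapping_level - 1
    if current == responsible:
        return  # this hypervisor handles the exit and resumes its guest
    exits_log.append((current, current + 1))  # forward one level up
    forward_exit(current + 1, trapping_level, exits_log)
```

For example, an exit in L2 produces two hops (L2 to L0, then L0 to L1), which is why deeper nesting multiplies the number of transitions.&lt;br /&gt;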
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work?&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (a virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. That vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine: only L0 is using the architectural hypervisor mode. So, to multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility, as L2&#039;s hypervisor, to handle it.&lt;br /&gt;
Regarding the way a single L2 exit is handled: L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 (or L3) exit causes many low-level exits (more exits, less performance). This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
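The VMCS merge step can be pictured as combining the environment L0 gives L1 with the environment L1 wants to give L2 (a loose Python sketch; real VMCS fields and merge rules are far more involved, and the field names here are invented):&lt;br /&gt;

```python
# Sketch of merging VMCS0-to-1 with VMCS1-to-2 into VMCS0-to-2, letting
# L0 run L2 directly. Field names here are hypothetical stand-ins.

def merge_vmcs(vmcs01, vmcs12):
    merged = {}
    # Guest state for L2 comes from what L1 specified for its guest.
    merged["guest_state"] = dict(vmcs12["guest_state"])
    # Host state must return control to L0, since hardware only knows L0.
    merged["host_state"] = dict(vmcs01["host_state"])
    # Controls: L0 must trap everything either L0 or L1 wants trapped.
    merged["exit_controls"] = set(vmcs01["exit_controls"]) | set(vmcs12["exit_controls"])
    return merged
```

The design choice this illustrates: the hardware only ever runs a VMCS prepared by L0, so every nested guest, at any depth, is flattened into a VMCS0-to-n that L0 launches on L1&#039;s behalf.&lt;br /&gt;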
&lt;br /&gt;
How does multi-dimensional paging work?&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU only provides two page tables, using the feature called EPT: one table takes virtual to guest-physical addresses, and the EPT takes guest-physical to host-physical addresses. The authors compress the three translations onto the two tables, going from beginning to end in two hops instead of three. This can be done with a shadow page table for the virtual machine, or with shadow-on-EPT, which compresses the three logical translations onto the two tables. The EPT tables rarely change, whereas the guest page tables change frequently, so L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
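The compression of the two nested translations into one table can be pictured as composing two mappings (an illustrative sketch only; real EPTs are multi-level radix trees built lazily on EPT violations, while plain integers and dicts stand in for them here):&lt;br /&gt;

```python
# Sketch: L0 builds EPT0-to-2 by composing EPT1-to-2 (L2-physical to
# L1-physical) with EPT0-to-1 (L1-physical to L0-physical).
# Dicts are hypothetical stand-ins for hardware page tables.

def compose_ept(ept12, ept01):
    """Return EPT0-to-2: L2-physical to L0-physical, in one hop."""
    ept02 = {}
    for l2_phys, l1_phys in ept12.items():
        if l1_phys in ept01:
            ept02[l2_phys] = ept01[l1_phys]
    return ept02
```

With the composed table loaded in hardware, an L2 memory access needs no exit at all for translation; only misses in the composed table fall back to L0.&lt;br /&gt;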
&lt;br /&gt;
How does I/O virtualization work?&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest runs a driver that knows it is in a VM (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which yields the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU to safely allow its virtual machine to bypass it and access the device. There is only one platform IOMMU, so L0 emulates an IOMMU for L1; L0 then compresses the multiple IOMMU translations into a single hardware IOMMU page table, so that L2 can program the device directly and the device DMAs into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How did they implement the micro-optimizations to make it go faster?&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transition between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0 most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying plus tracking. VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time, but VMCS data can be accessed without ill side effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not have to intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
The overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions in the lower level of the nested design (between L0 and L1). Second, the exit-handling code running on a guest hypervisor such as L1 is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits in the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research shown in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE&#039;&#039;&#039;: They do mention a master&#039;s thesis by Berghmans (citation 12 in the paper) that, if I understand it right, also covers software-only nested virtualization (they mention it in section 2 as well as in the video), but they claim it is inefficient because only the lowest-level hypervisor is able to take advantage of hardware virtualization support. In the Turtles project solution, all levels of hypervisors can take advantage of any present virtualization support. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
*Writing/organization-wise: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who&#039;s interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
*Writing/organization-wise: Some concepts, such as the VMCS, are written as though you are already familiar with how they work, or have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
* From quickly looking over their results section, it seems their tests are done at the L2 level, a guest with two hypervisors below it. I think it might have been useful to understand the limits of nesting if they did some tests at an even higher level of nesting, L4 or L5 or whatever, just to see what the effect is. --[[User:Mbingham|Mbingham]] 16:21, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6186</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6186"/>
		<updated>2010-12-02T04:47:25Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (the Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and such. I removed Smcilroy from the members list; I think he checked in here by mistake because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here... the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique. I mean, those sections should not be based on or written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird because at last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I got free time now. I am reading it again to refresh my memory and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him--JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the critique, so we gotta focus on that all together, not just one person, for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging (which is discussed in the paper), and computer ring security (which, again, they touch on at some point in the paper). I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to sort of formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us another week to do it. I am sure we all have at least 3 projects due soon, other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somewhat too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints as to what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I have both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end; there should not be a lot of writing for that part, and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted the other bits that were missing in the background concepts section, like the security uses, the models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here... The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization, and the macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things. How are we to critique it? By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth into things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0 -&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach; they talk only vaguely about the sideline things and provide references. The VMCS, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but I&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization topics. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here are some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* The solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, as other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think; I&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for the clear-up man. Sounds good. I&#039;ll see what else I can do in between the other work I have to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question directed at it on the exam review. If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program, or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, merely gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used, like data virtualization, storage virtualization, mobile virtualization, and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating-system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the possible issues that may arise from the interaction of those guest virtual machines with one another, and from their interaction with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bare-metal hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain, or access privileged hardware context, it triggers a trap or a fault, which gets caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
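The trap-and-emulate flow can be mimicked in a few lines of Python, with exceptions standing in for hardware traps. This is only an illustrative sketch with invented names, not how a real hypervisor is written:&lt;br /&gt;

```python
# Toy model of trap and emulate: the guest runs deprivileged, privileged
# operations fault, and the host hypervisor catches and emulates them.
# All names are invented for illustration.

class PrivilegedTrap(Exception):
    """Raised when a guest attempts a privileged operation."""
    def __init__(self, op):
        self.op = op

def guest_execute(op, privileged_ops):
    # Unprivileged instructions run natively; privileged ones fault.
    if op in privileged_ops:
        raise PrivilegedTrap(op)
    return 'ran %s natively' % op

def host_run(ops):
    privileged_ops = {'vmread', 'vmwrite', 'cr3_write'}
    log = []
    for op in ops:
        try:
            log.append(guest_execute(op, privileged_ops))
        except PrivilegedTrap as trap:
            # Host hypervisor inspects the trapped op, decides it is
            # legitimate, and emulates the requested outcome.
            log.append('host emulated %s' % trap.op)
    return log

result = host_run(['add', 'cr3_write', 'mov'])
```

Here the ordinary instructions run untouched, while the privileged cr3_write faults into the host, which emulates it and resumes the guest.&lt;br /&gt;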
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement their system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and websites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrades or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation, and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged, it can easily be removed, recreated, or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but apparently the authors were able to do it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machines directly through a hypervisor of their choice. In addition, nested virtualization provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: where every hypervisor handles the hypervisors running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trapping and related work.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it is running on the actual hardware. The idea is that when a guest hypervisor attempts an operation that requires hardware-level privileges, it triggers a fault or a trap; this trap is caught by the main host hypervisor and inspected to see whether it is a legitimate request. If it is, the host emulates the operation for the guest, again leaving it under the impression that it is running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor, which then forwards the trap and virtualization state to the level responsible for handling it. For instance, if L0 runs L1 and L1 attempts to run L2, the command to run L2 traps down to L0, and L0 then forwards it back to L1. This is the model we are interested in, because it is what x86 machines follow. See figure 1 in the paper for a better understanding of this.&lt;br /&gt;
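The forwarding behaviour of the single-level model can be illustrated with a tiny toy simulation (an illustration of the idea only, not code from the paper; the helper is invented): no matter which level traps, the hardware exits to L0, and L0 forwards upward until the responsible hypervisor is reached.&lt;br /&gt;

```python
# Toy model of trap delivery under single-level architecture support.
# Level numbers name hypervisors: 0 is the host hypervisor, 1 is a
# guest hypervisor running on it, and so on.

def trap_path(handler_level):
    """Return the list of levels visited for one trap.

    The hardware always exits to L0 first; L0 then forwards the trap
    up one level at a time until the hypervisor responsible for
    handling it (handler_level) is reached.
    """
    return [0] + list(range(1, handler_level + 1))

# A trap raised in L2 that L1 must handle still visits L0 first:
print(trap_path(1))  # [0, 1]
# A trap that L0 itself handles never leaves L0:
print(trap_path(0))  # [0]
```

Each hop in the returned path corresponds to a world switch in the real system, which is why deeper nesting multiplies the number of exits.&lt;br /&gt;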
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-optimizations to improve performance&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and the architecture supports only a single level of hypervisor. To multiplex the hardware so that L2 can run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2. When L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of those operations trap, so a single high-level L2 exit causes many low-level exits (and more exits means less performance). This problem was addressed by making each single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2; this process repeats continuously.&lt;br /&gt;
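The VMCS merge described above can be sketched with plain dictionaries (the field names here are invented for illustration; the real VMCS is an opaque hardware structure with Intel-defined fields): guest state comes from what L1 specified for L2, host state must return control to L0, and the controls must trap on anything either level wants to intercept.&lt;br /&gt;

```python
# Hypothetical sketch of merging two virtual machine control
# structures so the host hypervisor can run the nested guest directly.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    return {
        "guest_state": dict(vmcs_1_2["guest_state"]),  # run L2 itself
        "host_state": dict(vmcs_0_1["host_state"]),    # exits return to L0
        # Trap on the union of the exit conditions of both levels:
        "exit_controls": vmcs_0_1["exit_controls"] | vmcs_1_2["exit_controls"],
    }

vmcs_0_1 = {"guest_state": {"rip": "l1_entry"},
            "host_state": {"rip": "l0_handler"},
            "exit_controls": {"CPUID"}}
vmcs_1_2 = {"guest_state": {"rip": "l2_entry"},
            "host_state": {"rip": "l1_handler"},
            "exit_controls": {"HLT"}}

vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(sorted(vmcs_0_2["exit_controls"]))  # ['CPUID', 'HLT']
```

On each exit, L0 can then consult the controls L1 originally asked for to decide whether to handle the trap itself or forward it to L1.&lt;br /&gt;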
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware provides only two MMU translation tables: the regular page table, which takes virtual to guest-physical addresses, and the EPT, which takes guest-physical to host-physical addresses. The three logical translations must therefore be compressed onto the two hardware tables, going from beginning to end in two hops instead of three. One way to do this is shadow page tables for the virtual machine (shadow-on-EPT), which compress the three logical translations onto two tables but cause frequent exits, since the guest page tables change frequently. The EPT tables, on the other hand, rarely change, so instead L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in far fewer exits.&lt;br /&gt;
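The table compression can be sketched as composing two mappings into one (a toy model with dicts standing in for EPT tables; this is not the KVM implementation):&lt;br /&gt;

```python
# Toy composition of the two nested translations into one table, the
# way L0 conceptually builds EPT0-to-2 from EPT1-to-2 and EPT0-to-1.

def compose_ept(ept_1_2, ept_0_1):
    """Map L2-physical pages straight to L0-physical pages.

    Pages missing from either level are left out; in the real system
    a miss at either level faults to L0, which fills in the entry.
    """
    return {l2_page: ept_0_1[l1_page]
            for l2_page, l1_page in ept_1_2.items()
            if l1_page in ept_0_1}

ept_1_2 = {0x1000: 0x8000, 0x2000: 0x9000}    # L2-physical to L1-physical
ept_0_1 = {0x8000: 0x40000, 0x9000: 0x50000}  # L1-physical to L0-physical
ept_0_2 = compose_ept(ept_1_2, ept_0_1)       # L2-physical to L0-physical
```

With the compressed table installed, an L2 memory access needs no L1 involvement at all, which is where the reduction in exits comes from.&lt;br /&gt;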
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, which are aware they run on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs to use an IOMMU so that its virtual machines can access the device safely. There is only one platform IOMMU, so L0 emulates an IOMMU for L1 and then compresses the multiple IOMMU translations into the single hardware IOMMU page table, so that L2 can program the device directly and the device can DMA into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How the micro-optimizations were implemented to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2; each such transition involves an exit to L0 and then an entry. Within L0, most of the time is spent merging VMCSs, so they optimized this by only copying data between VMCSs when it has been modified, carefully balancing full copying against partial copying with tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once. Normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time. On the processors they tested, however, VMCS data can be accessed without ill side effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on other processors). The main cause of the exit-handling slowdown is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
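The field-copying trade-off can be illustrated with a toy count of accesses (purely illustrative; the field names are invented and the real vmread/vmwrite are CPU instructions, not function calls):&lt;br /&gt;

```python
# Contrast per-field copying (one simulated vmread plus one vmwrite
# per field) with a single bulk copy of the whole region.

FIELDS = ["rip", "rsp", "cr0", "cr3", "cr4", "efer"]  # invented subset

def copy_per_field(src):
    dst, accesses = {}, 0
    for field in FIELDS:
        value = src[field]   # simulated vmread (traps when run in L1)
        dst[field] = value   # simulated vmwrite (also traps in L1)
        accesses += 2
    return dst, accesses

def copy_bulk(src):
    return dict(src), 1      # one large memory copy, no per-field traps

state = {field: index for index, field in enumerate(FIELDS)}
print(copy_per_field(state)[1], copy_bulk(state)[1])  # 12 1
```

Each simulated access stands for an instruction that itself traps when executed by L1, which is exactly why collapsing them into one memory copy pays off.&lt;br /&gt;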
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3% and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running on a guest hypervisor such as L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing VMCS data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits triggered within the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research in this paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
* Writing and organization: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (anyone who is interested, feel free to take this topic)&lt;br /&gt;
&lt;br /&gt;
* Writing and organization: some concepts, such as the VMCSs, are written as if you are already familiar with how they work, or have read the appropriate references for that section of the paper&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6185</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6185"/>
		<updated>2010-12-02T04:47:01Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need to as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those paper by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of whos doing what. I should get the background concept done hopefully by tonight.  If anyone want to help with the other sections that would be great, please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, then you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary? I posted a link in references and I&#039;l try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided are excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here is my question: who is doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging which is discussed in the paper, computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper, the due date now is Dec 2nd.&#039;&#039;&#039; And thats really good, given that some of those concepts require time to sort of formulate. I also asked the prof on the approach that we should follow in terms of presenting the material, and he mentioned that you need to provide enough information for each section to make your follow student understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details, if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the backgrounds concept here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration, as far as I know, we should be allowed to do this. If anyone have any suggestions to what I have posted or any counter arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version than move it to the actual article. As for the critique section, how about we put a section on the talk page here and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edit in the Contribution and Critique but I agree lets open a thread here and All collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the backgrounds concepts and Michael is handling the research problem. The other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow&#039;s night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow&#039;s morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here.. The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth on such things like the VMC technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
The VMCS(0-&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach, they vaguely talk about the sideline things and provide references. The VMC technology from what I understood is just a creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
The instructions say that both style and content can be critiqued. I guess the organization of the paper would fall under style, but i&#039;m not sure how fair it is to critique how much they go in depth on certain things, especially some background stuff. After all, the audience of this paper is people who are already well versed in OS and virtualization stuff. That&#039;s not to say that we shouldn&#039;t bring it up, especially if we feel they don&#039;t sufficiently explain a new technique or notation they are using. &lt;br /&gt;
&lt;br /&gt;
I think it&#039;s also important to remember that our critique will contain things they have done well, not just things they could have done better. Considering that this paper got the best paper award at the largest OS conference, I think it&#039;s safe to say our critique will have many more good things than bad.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s some things they have done well on first inspection, just to get some ideas out there:&lt;br /&gt;
* Solution is extensible to an arbitrary nesting depth without major loss of performance&lt;br /&gt;
* Solution doesn&#039;t depend on modified hardware or software (except for the lowest-level hypervisor); we can reference previous solutions that do require modifications&lt;br /&gt;
* The paper doesn&#039;t ignore virtualizing I/O devices to an arbitrary nesting depth, other techniques do&lt;br /&gt;
* I think the paper does well in laying out the theoretical approach to the problem, as well as demonstrating impressive empirical results.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll have some time to work on this tomorrow, probably clean up the research problem section, maybe kick off the contribution section if no one&#039;s started it, and put up some more extensive stuff for the critique. Let me know what you guys think, i&#039;m off to bed pretty soon, haha! --[[User:Mbingham|Mbingham]] 03:41, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
Okay, thanks for the clear up man. Sounds good.  I&#039;ll see what else I can do in between other work I got to do tonight.&lt;br /&gt;
One thing we should remember is to make sure that our essay clearly answers the question that is directed to it on the exam review.  If we get some other good ideas for questions, we should submit those to Anil as well.&lt;br /&gt;
 Questions 1 and 2 relate to our essay, in my mind.&lt;br /&gt;
&amp;quot;What are two uses for nested virtual machines?&lt;br /&gt;
Multi-dimensional page tables are designed to avoid using shadow page tables in nested virtualization. What are shadow page tables, and when must they be used?&amp;quot;&lt;br /&gt;
--JSlonosky 04:47, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a virtualized environment, gives the guest the illusion that it is running directly on the real hardware. In other words, we can view the virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM L2, L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute, gain or access privileged hardware context, it triggers a trap or fault which is caught and handled by the host hypervisor. The host hypervisor determines whether the instruction should be allowed to execute and, based on that decision, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
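The model above can be sketched in a few lines of Python. This is a toy illustration only, not the paper&#039;s implementation: the opcodes, the allowed set and the emulated results are all made-up integers.&lt;br /&gt;

```python
# Toy sketch of the trap-and-emulate model: a guest executing a
# privileged instruction traps to the host hypervisor, which validates
# the request and emulates the outcome. Opcodes are hypothetical ints.

ALLOWED_OPCODES = {1, 2, 3}   # hypothetical set of permitted privileged ops

def emulate(opcode):
    # Produce the outcome the guest expects, as if it ran on bare metal
    # (here just a dummy computed value).
    return opcode * 10

def host_handle_trap(opcode):
    # The host hypervisor catches the trap and decides whether to
    # emulate the instruction or reject it.
    if opcode in ALLOWED_OPCODES:
        return emulate(opcode)
    return None               # denied: the guest sees a fault instead

def guest_exec(opcode):
    # Every privileged instruction in the guest traps to the host.
    return host_handle_trap(opcode)
```

The key point is that the guest never touches the hardware directly: every privileged operation detours through the host, which either emulates it or rejects it.&lt;br /&gt;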
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement its system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster&lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged it can easily be removed, recreated or even restored, since we can&lt;br /&gt;
create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they don&#039;t alter the underlying architecture. This is the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, it provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for supporting nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architectural support: every hypervisor handles the hypervisors and virtual machines running directly on top of it. For instance, if L0 (the host hypervisor) runs L1 and L1 runs its own VM L2, then the trap handling and the work needed to let L1 instantiate that VM is done by L1 itself, without involving L0. More generally, if L2 attempts to create its own VM, then L2 handles the associated traps.&lt;br /&gt;
&lt;br /&gt;
* Single-level architectural support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX facility in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to operate with hardware-level privileges, it provokes a fault or trap; this trap is caught by the host hypervisor and inspected to see whether it&#039;s a legitimate request. If it is, the host emulates the privileged operation on the guest&#039;s behalf, again letting it think that it&#039;s running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap goes back to the main host hypervisor. The host hypervisor then forwards the trap and the associated virtualization state to whichever level is responsible for handling it. For instance, suppose L0 runs L1 and L1 attempts to launch L2: the launch instruction traps down to L0, which performs the work on L1&#039;s behalf, and subsequent exits caused by L2 are caught by L0 and forwarded to L1 when L1 is responsible for them. This is the model we&#039;re interested in, because it&#039;s what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
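The forwarding rule above can be written as a tiny toy model: the hardware always delivers a trap to L0 first, and L0 routes it to the parent of the trapping guest. The integer levels are of course a simplification of the real dispatch logic.&lt;br /&gt;

```python
# Toy model of the single-level x86 scheme: the hardware delivers every
# trap to L0, and L0 forwards it to the hypervisor one level below the
# trapping guest, which is the one responsible for servicing it.
# Levels are plain integers (0 = host hypervisor).

def deliver_trap(trapping_level):
    # Step 1: regardless of nesting depth, the hardware hands the trap
    # to L0 first.
    first_receiver = 0
    # Step 2: L0 forwards the trap to the parent hypervisor of the
    # trapping guest (level n is serviced by level n - 1).
    responsible = trapping_level - 1
    return (first_receiver, responsible)
```

So a trap from L2 is seen first by L0 and then serviced by L1, which is exactly the round trip figure 1 of the paper illustrates.&lt;br /&gt;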
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does the nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine: the hardware provides only a single level of architectural support, so only L0 can use VMX directly. In order to multiplex the hardware and make L2 run as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to produce VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2. When L2 causes a trap, L0 either handles it itself or forwards it to L1, depending on whether it is L1&#039;s responsibility, as L2&#039;s hypervisor, to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This wouldn&#039;t normally be a problem, but because L1 is running in guest mode as a virtual machine, each of these operations traps, so a single high-level L2 (or L3) exit causes many additional exits, and more exits mean less performance. The authors corrected this by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end L1 or L0, depending on the trap, finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
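The VMCS merge can be pictured with a toy data structure. A real merge works field by field with hypervisor-specific rules; here a VMCS is just a dict from made-up integer field ids to values, split into hypothetical host-state and guest-state groups.&lt;br /&gt;

```python
# Simplified sketch of merging VMCS 0-to-1 with VMCS 1-to-2 to build
# VMCS 0-to-2: host-state fields come from VMCS 0-to-1 (L0 must regain
# control on every exit), while guest-state fields come from VMCS 1-to-2
# (they describe the L2 guest that will actually run). Field ids are
# hypothetical integers.

HOST_FIELDS = {1, 2}    # made-up ids of host-state fields
GUEST_FIELDS = {3, 4}   # made-up ids of guest-state fields

def merge_vmcs(vmcs01, vmcs12):
    merged = {}
    for f in HOST_FIELDS:
        merged[f] = vmcs01[f]   # keep L0 as the exit destination
    for f in GUEST_FIELDS:
        merged[f] = vmcs12[f]   # run the guest state L1 prepared for L2
    return merged
```

The design choice mirrors the paper: the merged structure lets L0 run L2 directly on the hardware while still trapping every exit back to L0.&lt;br /&gt;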
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
With n = 2 nested virtualization there are three logical address translations: from L2 virtual to L2 physical, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation, but the hardware MMU provides only two page tables via EPT: virtual to guest-physical, and guest-physical to host-physical. The idea is to compress the three translations onto the two hardware tables, going from start to end in two hops instead of three. One way to do this is shadow page tables on top of EPT (shadow-on-EPT), which squeezes the three logical translations into the two available dimensions. The paper&#039;s multi-dimensional paging exploits the fact that the EPT tables rarely change, whereas the guest page tables change frequently: L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in far fewer exits.&lt;br /&gt;
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest knows it is running on a hypervisor (Barham03, Russell08), and direct device assignment (Levasseur04, Yassour08), which gives the best performance. To get the best performance the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization (any of the three methods between L0 and L1, and again between L1 and L2); the authors chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. For DMA, each hypervisor (L0, L1) needs an IOMMU so that its virtual machines can safely bypass it when accessing the device. Since there is only one hardware IOMMU, L0 emulates an IOMMU for L1 and then compresses the multiple IOMMU mappings into the single hardware IOMMU page table, so that L2 can program the device directly. The device&#039;s DMAs then land directly in L2&#039;s memory space.&lt;br /&gt;
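The IOMMU compression is the same composition trick as multi-dimensional paging, applied to DMA. The toy model below uses hypothetical integer page numbers and a dict for the hardware table; real IOMMU programming is far more involved.&lt;br /&gt;

```python
# Toy model of multi-level device assignment: L0 folds the IOMMU mapping
# it emulates for L1 with its own translation into the one real hardware
# IOMMU table, so device DMA lands directly in the host pages backing L2
# memory without either hypervisor intervening on the data path.

def program_hw_iommu(l1_iommu, l0_xlate):
    # hardware table: device-visible page to host-physical page
    return {dev: l0_xlate[p1] for dev, p1 in l1_iommu.items()}

def dma_write(hw_iommu, dev_page, value, host_mem):
    # the device writes through the compressed table straight into the
    # host page owned by L2, bypassing L0 and L1 software entirely
    host_mem[hw_iommu[dev_page]] = value
```

Only the control path (programming the table) involves the hypervisors; once the table is set up, DMA traffic needs no exits at all.&lt;br /&gt;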
&lt;br /&gt;
&lt;br /&gt;
How the micro-optimizations make it go faster:&lt;br /&gt;
The two main places where the guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2, and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. The authors optimized the transitions between L1 and L2, each of which involves an exit to L0 and then an entry. In L0 most of the time is spent merging VMCSs, so they optimized this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying and tracking. The VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time, but VMCS data can be accessed without ill side-effects by bypassing&lt;br /&gt;
vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by&lt;br /&gt;
privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. On AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specification.&lt;br /&gt;
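The win from bulk copying can be shown with a toy cost model. The exit counts below are illustrative assumptions (one exit per trapped field access, zero for a plain memory copy), not measured numbers from the paper.&lt;br /&gt;

```python
# Toy cost model for the bulk-copy optimization: touching VMCS fields
# one at a time with vmread/vmwrite traps once per field, while one
# large memory copy avoids the trapping instructions entirely.

def copy_per_field(fields):
    exits = 0
    dst = {}
    for k, v in fields.items():
        dst[k] = v    # each single-field vmread/vmwrite would trap to L0
        exits += 1
    return dst, exits

def copy_bulk(fields):
    # one big memory copy of the whole VMCS region: no trapping
    # instructions, so no additional exits on the modeled fast path
    return dict(fields), 0
```

Either way the same data arrives; the difference is purely in how many L1 exits the copy provokes, which is exactly what the optimization targets.&lt;br /&gt;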
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3%, and 6.3% for SPECjbb.&lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits is much slower when it runs in the guest hypervisor L1 than when the same code runs in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits provoked by the exit-handling code itself).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I&#039;ve read so far, the research shown in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
*Writing/organization-wise: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. to be continued. (anyone who is interested, feel free to take this topic)&lt;br /&gt;
&lt;br /&gt;
*Writing/organization-wise: some concepts, such as the VMCSs, are written as though you are already familiar with how they work, or have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6132</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6132"/>
		<updated>2010-12-02T03:27:12Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done hopefully by tonight. If anyone wants to help with the other sections that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the Critique so we gotta focus on that altogether not just one person for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah absolutely, I agree. But first, let&#039;s pin down the crucial points. And then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, he can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe get a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate, I think its because its not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided are excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, computer paging which is discussed in the paper, and computer ring security which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far should we go here ? should we provide background on the hardware architecture used by the authors like the x86 family and the VMX chips, or maybe some of the concepts discussed later on in the testing such as optimization, emulation, para-virtualization ?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that, from what I understood, they apply the same model (the trap and emulate) but they provide optimizations and ways to increase the trap calls efficiency between the nested environments, so thats definitely a contribution, but its more of a performance optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of these concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the backgrounds concept here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration, as far as I know, we should be allowed to do this. If anyone have any suggestions to what I have posted or any counter arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design somehow too complex or difficult to get a hold of, I&#039;m not sure about this, but just guessing from a general programming point of view. I will email the prof today, maybe he can give us some hints for what can be considered a weakness or a bad spot if you will in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add in what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone (or some people) to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yea really, great work on the Background. It&#039;s looking slick. I added some initial edits in the Contribution and Critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow.  I have both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004 --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end; there should not be a lot of writing for that part, and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted the other bits that were missing from the background concepts section, like the security uses, the models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that should be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day; I will try to post some summaries for performance and implementation, or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here... The contribution section still needs a lot of work. We need to talk about their innovations and what they did for:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it?  By their methods, or by the paper itself?&lt;br /&gt;
I find that, in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS mechanism, but they almost use that as an excuse for not explaining things in the paper itself.&lt;br /&gt;
The VMCS(0 -&amp;gt;1) notation, for instance, isn&#039;t explained.  I understand what they mean, but it seems that they assume that you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
-----------------&lt;br /&gt;
I think most research papers follow that kind of approach: they talk only vaguely about the peripheral things and provide references. The VMCS mechanism, from what I understood, is just the creation of an environment to link or switch between hypervisors. --[[User:Hesperus|Hesperus]] 03:26, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, merely gives the guest the illusion that it is running directly on the real hardware. In other words, we can view the virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the issues that may arise from the interaction of the guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the bare-metal hypervisor (L0) runs a VM called L1; L1 in turn runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute privileged instructions or access privileged hardware context, it triggers a trap or fault that is caught by the host hypervisor. The host hypervisor then determines whether the instruction should be allowed to execute and, based on that, provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
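To make the trap-and-emulate cycle concrete, here is a toy sketch (with invented instruction names and state; not from the paper or any real hypervisor): a guest instruction either runs natively or, if privileged, traps to the host hypervisor, which emulates its effect on the guest&#039;s virtual state only.&lt;br /&gt;

```python
# Toy model of trap and emulate. Instruction names and the state layout
# are invented for illustration; no real ISA or hypervisor API is used.
PRIVILEGED = {"write_cr3", "vmlaunch", "out"}

def run_instruction(insn, virtual_hw):
    """Run one guest instruction; privileged ones trap to the host."""
    if insn in PRIVILEGED:
        return host_emulate(insn, virtual_hw)   # trap: host takes over
    return "executed " + insn + " natively"

def host_emulate(insn, virtual_hw):
    # The host applies the effect to the guest virtual hardware state,
    # never to the real hardware, preserving the guest illusion.
    virtual_hw[insn] = virtual_hw.get(insn, 0) + 1
    return "emulated " + insn
```

The point of the sketch is only the control flow: the guest never touches real privileged state; the host inspects the request and emulates the outcome.&lt;br /&gt;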
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to let customers host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customers are free to implement their systems on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The best-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially a file on the host operating system, if it gets corrupted or damaged it can easily be removed, recreated or even restored, since we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (the paper gives the example of XP inside Win 7). Maybe going over the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture, unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. The approach is software-based, meaning they&#039;re not altering the underlying architecture. This is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization with multiple hypervisors comes down to efficiency. For example: virtualization on servers has been rapidly gaining popularity, and the next evolutionary step is to extend the single level of memory-management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the rapidly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machines directly through a hypervisor of their choice. In addition, nesting provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper in which he talks about more recent research work on virtualization; in his first paragraph in particular, he refers to some more recent research by the VMware technical team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
There are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles every hypervisor running on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 handles the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think it is running on the actual hardware. The idea is that when a guest hypervisor tries to perform a privileged, hardware-level operation, it triggers a fault or trap; this trap is caught by the host hypervisor and inspected to see whether it is a legitimate and appropriate request. If it is, the host emulates it on the guest&#039;s behalf, again letting the guest think it is actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor, which then forwards the trap and the virtualization work to the level responsible for it. For instance, suppose L0 runs L1, and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then carries it out on L1&#039;s behalf. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
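As a rough sketch of this routing rule (hypothetical, not code from the paper): in the single-level model every exit lands at L0 first, and L0 then either handles it itself or forwards it to the hypervisor directly below the trapping guest.&lt;br /&gt;

```python
# Toy routing of exits in single-level (x86-style) nested virtualization.
# Levels are integers: 0 is the bare-metal hypervisor, level n runs as a
# guest of level n-1. An illustration of the idea, not an implementation.
def route_exit(trapping_level, l0_handles_it):
    """Return the level that ends up handling an exit."""
    assert trapping_level >= 1
    # Hardware always delivers the exit to L0 first.
    if trapping_level == 1 or l0_handles_it:
        return 0
    # Otherwise L0 forwards it to the guest hypervisor responsible,
    # i.e. the parent of the trapping level.
    return trapping_level - 1
```

So an L2 exit is either consumed by L0 or forwarded to L1; it never reaches L1 directly, the way multi-level hardware support would allow.&lt;br /&gt;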
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (VMCS: virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine: L0 is already using the single architectural mode for a hypervisor, so the hardware must be multiplexed to make L2 run as a virtual machine of L1. To do this, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 now launches L2. When L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is the L1 virtual machine&#039;s responsibility to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem; but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 exit (or L3 exit) causes many exits, and more exits mean less performance. This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and this process is repeated continuously.&lt;br /&gt;
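The VMCS merge above can be sketched as follows (a toy model with invented field names; a real VMCS has dozens of architecturally defined fields): the guest state of VMCS0-&amp;gt;2 comes from the VMCS that L1 prepared, the host state must return control to L0 and so comes from VMCS0-&amp;gt;1, and the intercept controls of both layers are combined.&lt;br /&gt;

```python
# Toy VMCS merge: dicts stand in for VMCS structures, and sets of event
# names stand in for the exit/intercept control bits. Illustrative only.
def merge_vmcs(vmcs01, vmcs12):
    """Build VMCS0->2 from VMCS0->1 and VMCS1->2."""
    return {
        # What L2 sees: the guest state L1 set up for it.
        "guest_state": dict(vmcs12["guest_state"]),
        # Where exits land: L0, as recorded in the VMCS L0 set up.
        "host_state": dict(vmcs01["host_state"]),
        # Anything either L0 or L1 wants to intercept must still exit.
        "exit_controls": vmcs01["exit_controls"] | vmcs12["exit_controls"],
    }
```

The real merge must also translate addresses between levels and respect fields L1 may not control; the sketch shows only the basic shape of the operation.&lt;br /&gt;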
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations, from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU supports only two page tables: the regular page table (virtual to guest-physical) and the EPT (guest-physical to host-physical). The three logical translations must therefore be compressed onto the two available tables, going from start to end in two hops instead of three. The baseline approach is a shadow page table for the virtual machine, &amp;quot;shadow-on-EPT&amp;quot;, which compresses the three logical translations onto the two tables. The key observation is that the EPT tables rarely change, whereas the guest page tables change frequently. So, in multi-dimensional paging, L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
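The table construction can be illustrated as the composition of two translations (toy dicts mapping page numbers, an assumption for illustration, not the paper&#039;s code or real EPT structures): EPT0-&amp;gt;2 maps an L2-physical page straight to the L0-physical page the two-step walk would have produced.&lt;br /&gt;

```python
# Toy composition of nested translations. Pages are small integers;
# real EPTs are multi-level radix trees filled in lazily on EPT faults.
def build_ept02(ept12, ept01):
    """Compose EPT1->2 with EPT0->1 into a direct EPT0->2 table."""
    return {l2_page: ept01[l1_page]
            for l2_page, l1_page in ept12.items()
            if l1_page in ept01}        # unmapped pages stay unmapped
```

Because both input tables change rarely, the composed table stays valid most of the time, so L2 memory accesses no longer require an exit per translation step.&lt;br /&gt;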
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest runs a driver that knows it is virtualized (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors use an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization, but of the many options they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor, L0 and L1, needs to use an IOMMU to let its virtual machines access the device safely. There is only one platform IOMMU, so L0 emulates an IOMMU for L1 and then compresses the multiple IOMMU page tables into the single hardware IOMMU page table, so that L2 can program the device directly. The device then DMAs into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How did they implement the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only. They optimized the transitions between L1 and L2: each such transition involves an exit to L0 and then an entry. In L0, most of the time is spent merging VMCSs, so they optimize this by copying data between VMCSs only when it has been modified, carefully balancing full copying against partial copying with tracking. VMCS handling is optimized further by copying multiple VMCS fields at once: normally, by Intel&#039;s specification, reads and writes must be performed using the vmread and vmwrite instructions, which operate on a single field at a time, but VMCS data can be accessed without ill side-effects by bypassing vmread and vmwrite and copying multiple fields at once with large memory copies (this might not work on processors other than the ones they tested). The main cause of the slowdown in exit handling is the additional exits caused by privileged instructions in the exit-handling code itself: vmread and vmwrite are used by the hypervisor to change the guest and host specifications, causing L1 to exit multiple times while it handles a single L2 exit. With AMD SVM, by contrast, the guest and host specifications can be read and written directly using ordinary memory loads and stores, so L0 does not need to intervene while L1 modifies L2&#039;s specifications.&lt;br /&gt;
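The copy optimization can be caricatured like this (illustrative, with invented field names; on real Intel hardware the per-field path is the architecturally sanctioned vmread/vmwrite): the win is replacing many per-field accesses, each of which traps when executed by L1, with one large copy.&lt;br /&gt;

```python
# Toy comparison of per-field VMCS access vs. one bulk copy.
# Field names are invented; the returned count stands in for the number
# of privileged accesses (each of which would trap when L1 issues it).
FIELDS = ("rip", "rsp", "cr0", "cr3", "efer")

def copy_per_field(src, dst):
    """One simulated vmread plus one vmwrite per field."""
    accesses = 0
    for field in FIELDS:
        dst[field] = src[field]
        accesses += 2
    return accesses

def copy_bulk(src, dst):
    """One large memory copy over the whole VMCS region."""
    dst.update((field, src[field]) for field in FIELDS)
    return 1
```

Both versions produce the same destination state, but the bulk version does it in a single access, which is why bypassing vmread/vmwrite (where safe) pays off.&lt;br /&gt;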
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the code handling exits that runs in a guest hypervisor such as L1 is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines the optimization steps taken to minimize this overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing the vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated by the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
*Writing/organization: they provide links and resources that help explain the concepts they only briefly touch upon&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
*Writing/organization: some concepts, such as the VMCSs, are written as if you are already familiar with how they work or have read the appropriate references for that section of the research project&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6102</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6102"/>
		<updated>2010-12-02T03:03:58Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see whos still on board for the course. So please&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7, not this one. Did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (the Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve already been playing with VirtualBox, VMware and such things, so we should be familiar with some of the concepts the article touches on, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our own machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet.  I figured I might as well see him while I can, since he is going to be dead soon.  How is he not already?  Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here... the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique; I mean, those sections should not be based on or written from the perspective of one person, we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible; we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.carleton.ca.  And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The related work section has everything we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully, by tonight.  If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we gotta focus on that all together, not just one person, for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across something he thinks is good or bad, he can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe get a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in the references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because that&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, protection rings, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging, which is discussed in the paper, and ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of the trap handling between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date now is Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in two small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here, and people can add in what they thought worked/didn&#039;t work with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow.  I&#039;ve got both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end, there should not be a lot of writing for that part and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, models of virtualization, and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day, I will try to post some summaries for performance and implementation or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here... The contribution section still needs a lot. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, memory virtualization, I/O virtualization and the micro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things.  How are we to critique it: by their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMCS technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
For example, the VMCS(0-&amp;gt;1) notation isn&#039;t explained.  I understand what they mean, but it seems that they assume you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program, or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a virtualized environment and possibly a guest hypervisor, gives the guest the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used, like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that exists one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise due to the interaction of those guest virtual machines with one another, and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute a privileged instruction or access privileged hardware state, it triggers a trap or fault which is caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute, and based on that, it provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
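The trap-and-emulate flow described above can be sketched as a toy model. All the names here (the class, the op strings, the set of privileged ops) are invented for illustration; real hypervisors do this with hardware-assisted machine code, not Python.&lt;br /&gt;

```python
# Toy model of trap-and-emulate: privileged guest operations trap to the
# host hypervisor, which emulates them; everything else runs directly.
# All identifiers are invented for this sketch.

PRIVILEGED_OPS = {"load_cr3", "vmlaunch", "out_port"}

class HostHypervisor:
    def __init__(self):
        self.log = []  # record of traps the host had to handle

    def run_guest_instruction(self, op):
        if op in PRIVILEGED_OPS:
            # The privileged instruction faults in the guest...
            self.log.append("trap: " + op)
            # ...and the host emulates the requested outcome.
            return "emulated " + op
        # Unprivileged instructions run directly on the hardware.
        return "direct " + op

host = HostHypervisor()
print(host.run_guest_instruction("add"))       # direct add
print(host.run_guest_instruction("vmlaunch"))  # emulated vmlaunch
```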
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement his system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in the live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMWare and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since&lt;br /&gt;
we can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization: why we use nested virtualization (the paper gives the example of XP inside Win 7). Maybe going over the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture. This is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but the authors were able to achieve it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. For example: virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend single-level memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the rapidly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, nested virtualization provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the traps of the hypervisor running directly on top of it. For instance, suppose L0 (the host hypervisor) runs L1. If L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM is done by L0. More generally, if L2 attempts to create its own VM, then L1 will do the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions, in the paper&#039;s implementation) and presents a virtual platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think it is running on the actual hardware. The idea is that when a guest hypervisor tries to perform an operation requiring hardware-level privileges, it provokes a fault or trap. This trap is caught by the host hypervisor and inspected to see whether it is a legitimate and appropriate request; if it is, the host emulates the operation on behalf of the guest, again letting the guest think it is running directly on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor, which then forwards the trap and virtualization state to the level above that is responsible for it. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 first traps down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest-level hypervisor) runs L1 with VMCS0-&amp;gt;1 (VMCS: virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed to the CPU when the VM is launched. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 must handle the trap, because L1 is itself running as a virtual machine and the x86 architecture supports only a single level of hypervisor. In order to multiplex the hardware so that L2 runs as a virtual machine of L1, L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2, enabling L0 to run L2 directly. L0 then launches L2. When L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility to handle it.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, all of these operations trap, so a single high-level L2 (or L3) exit causes many exits, and more exits mean less performance. This problem was corrected by making the single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2. This process repeats continuously.&lt;br /&gt;
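The VMCS merge described above can be sketched as a toy model. A real VMCS has dozens of hardware-defined fields; the two-field dicts and the merge rule below are simplified stand-ins invented for illustration only, not the paper&#039;s actual data layout.&lt;br /&gt;

```python
# Simplified sketch of merging VMCS0to1 and VMCS1to2 into VMCS0to2,
# so that L0 can run L2 directly.  Field names are invented.

def merge_vmcs(vmcs01, vmcs12):
    """The guest state of the merged VMCS describes L2 (taken from
    VMCS1to2), while the host state must point at L0 (taken from
    VMCS0to1) so that exits land in L0."""
    return {
        "guest_state": vmcs12["guest_state"],  # what L2 looks like
        "host_state": vmcs01["host_state"],    # where exits land: L0
    }

vmcs01 = {"guest_state": "L1-regs", "host_state": "L0-entry-point"}
vmcs12 = {"guest_state": "L2-regs", "host_state": "L1-entry-point"}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
print(vmcs02["guest_state"])  # L2-regs
print(vmcs02["host_state"])   # L0-entry-point
```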
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from L2 virtual to L2 physical addresses, from L2 physical to L1 physical, and from L1 physical to L0 physical. That is three levels of translation, but the hardware MMU provides only two page tables (via EPT): virtual to guest-physical, and guest-physical to host-physical. So the three translations must be compressed onto the two available tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine, i.e. shadow-on-EPT, which compresses the three logical translations into two. The EPT tables rarely change, whereas the guest page tables change frequently; L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
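The compression of translations described above amounts to composing two mapping tables into one direct table. The sketch below is a toy illustration of that idea; real page tables map pages in hardware, and the numeric addresses and table names here are invented for the example.&lt;br /&gt;

```python
# Toy illustration of folding three logical address translations onto
# fewer tables, in the spirit of multi-dimensional paging.
# Addresses are plain integers; all names are invented.

l2_virt_to_l2_phys = {0: 10, 1: 11}    # the page table of L2 itself
l2_phys_to_l1_phys = {10: 20, 11: 21}  # EPT1to2, maintained by L1
l1_phys_to_l0_phys = {20: 30, 21: 31}  # EPT0to1, maintained by L0

def compose(outer, inner):
    """Collapse two translation tables into one direct table."""
    return {addr: outer[mid] for addr, mid in inner.items()}

# L0 folds the two EPT levels into a single EPT0to2 table, so the
# two-level hardware MMU covers all three logical translations.
ept0to2 = compose(l1_phys_to_l0_phys, l2_phys_to_l1_phys)
print(ept0to2)  # {10: 30, 11: 31}
```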
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers, where the guest runs a driver that knows it is virtualized (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance they used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization; of these, they used multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices, bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA, and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs an IOMMU to let its virtual machines access the device safely. There is only one level of IOMMU in hardware, so L0 needs to emulate an IOMMU for L1. L0 then compresses the multiple IOMMU translations into a single hardware IOMMU page table, so that L2 can program the device directly and the device DMAs into L2&#039;s memory space directly.&lt;br /&gt;
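The &amp;quot;3x3 options&amp;quot; mentioned above come from each of the two virtualization levels independently picking one of the three I/O models. A tiny enumeration makes the count concrete (purely illustrative; the model names are just the three approaches listed above):&lt;br /&gt;

```python
# Enumerate the 3x3 I/O virtualization options for two nested levels:
# each level (L0-for-L1 and L1-for-L2) picks one of three I/O models.
from itertools import product

IO_MODELS = ["device emulation", "para-virtualized drivers",
             "direct device assignment"]

options = list(product(IO_MODELS, repeat=2))
print(len(options))  # 9

# The best-performing combination in the paper corresponds to direct
# assignment at both levels, i.e. multi-level device assignment.
best = ("direct device assignment", "direct device assignment")
print(best in options)  # True
```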
&lt;br /&gt;
&lt;br /&gt;
How the micro-optimizations make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running in the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were confined to L0.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in the guest hypervisor L1 is much slower than the same code in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated by the exit-handling code).&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. They also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6101</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=6101"/>
		<updated>2010-12-02T03:03:21Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, as I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (the Turtles Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our own machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. Haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet.  I figured I might as well see him while I can, since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here... the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially the research contribution and critique. I mean, those sections should not be written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I have free time now. I am reading it again to refresh my memory and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The section on the related work has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should hopefully get the background concepts done by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the critique, so we gotta focus on that all together, not just one person, for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone happens to come across what he thinks is a good or bad point, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can maybe have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that. Look at the first two paragraphs of the introduction section on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because that&#039;s not the purpose or aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or on some of the concepts discussed later in the testing, such as optimization, emulation and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will speak with and consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in terms of presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us another week to do it. I am sure we all have at least 3 projects due soon, other than this essay. I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counter-arguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive amount of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints about what can be considered a weakness, or a bad spot if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone/some people to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Yeah really, great work on the background. It&#039;s looking slick. I added some initial edits in the contribution and critique, but I agree, let&#039;s open a thread here and all collaborate. --[[User:Praubic|Praubic]] 18:24, 30 November 2010 (UTC)&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice, man. Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have it up later today or tomorrow. I have both an essay and a game dev project due tomorrow, so after 1 I will be free to work on this until it is time for 3004 --JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
I put up an initial version of the research problem section in the article. Let me know what you guys think. --[[User:Mbingham|Mbingham]] 19:53, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
Hey guys. Since I&#039;m working on the background concepts and Michael is handling the research problem, the other members should handle the contribution part. I think everything we need for the contribution section is in section 3 of the article (3.1, 3.2, 3.3, 3.4, 3.5). You can also make use of the things we posted here. Just to be on the safe side, we need to get this done by tomorrow night. I&#039;m working on a couple of definitions as we speak and will hopefully be done by tomorrow morning.&lt;br /&gt;
&lt;br /&gt;
PS: We should leave the critique to the end; there should not be a lot of writing for that part, and we must all contribute.&lt;br /&gt;
&lt;br /&gt;
--[[User:Hesperus|Hesperus]] 01:45, 1 December 2010 (UTC)&lt;br /&gt;
-----------------------------&lt;br /&gt;
Just posted other bits that were missing in the background concepts section, like the security uses, the models of virtualization and para-virtualization. They&#039;re just a rough version, however; I will edit them in the next few hours. I just need to write something for protection rings and that would be it, I guess.&lt;br /&gt;
&lt;br /&gt;
I can help with the other sections for the rest of the day; I will try to post some summaries for performance and implementation, or even the related work. --[[User:Hesperus|Hesperus]] 07:26, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----------------------------&lt;br /&gt;
Guys, we need to get moving here... The contribution section still needs a lot of work. We need to talk about their innovations and the things they did there:&lt;br /&gt;
CPU virtualization, Memory virtualization, I/O virtualization and the Macro-optimizations.&lt;br /&gt;
&lt;br /&gt;
I will be posting something regarding this in the next few hours. --[[User:Hesperus|Hesperus]] 22:53, 1 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I have looked over the paper again and I am wondering about some things. How are we to critique it? By their methods, or by the paper itself?&lt;br /&gt;
I find that in the organization of the paper, they give you the links and extra information to look more in depth at things like the VMX technology, but they almost use that as an excuse for not explaining things in the paper.&lt;br /&gt;
For example, the VMCS(0 -&amp;gt;1) notation isn&#039;t explained. I understand what they mean, but it seems that they assume you already know some things. --JSlonosky 03:03, 2 December 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is creating an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used, like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating system environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), a hypervisor is a software module that sits one privilege level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute or access privileged hardware state, it triggers a trap or fault, which is caught and handled by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute. Based on that, the host hypervisor provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
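The trap-and-emulate model described above can be pictured with a small toy simulation in Python (all names here are illustrative, not from the paper or from any real hypervisor):&lt;br /&gt;

```python
# Toy sketch of trap-and-emulate: a guest hypervisor attempting a
# privileged operation traps to the host hypervisor, which validates
# the request and emulates the outcome. Names are illustrative only.

class HostHypervisor:
    ALLOWED = {"read_cr3", "cpuid"}      # privileged ops we will emulate

    def handle_trap(self, op):
        """Called when a guest attempts a privileged operation."""
        if op not in self.ALLOWED:
            return "fault_injected"       # refuse: reflect a fault to the guest
        # Emulate the outcome so the guest believes it ran on bare metal.
        return "emulated:" + op

class Guest:
    def __init__(self, host):
        self.host = host

    def execute(self, op, privileged=False):
        if privileged:
            # Privileged instructions cannot run directly; they trap.
            return self.host.handle_trap(op)
        return "ran:" + op                # unprivileged code runs natively

guest = Guest(HostHypervisor())
print(guest.execute("add"))                         # ran:add
print(guest.execute("read_cr3", privileged=True))   # emulated:read_cr3
print(guest.execute("write_msr", privileged=True))  # fault_injected
```

The point is only the control flow: unprivileged work runs natively, while privileged operations detour through the host hypervisor, which decides to emulate or reject them.&lt;br /&gt;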
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customer has the freedom to implement his system on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well-known example of an IaaS provider is Amazon Web Services (AWS). AWS presents a virtualized platform on which other services and web sites, such as Netflix, host their APIs and databases on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used for live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMware and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if it is corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concepts stuff, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization: why we use nested virtualization (the paper gives the example of XP inside Win 7), and maybe going over the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary: under the &amp;quot;main contributions&amp;quot; part, do you think we should count nested VMX virtualization as a contribution? If we have multiplexing memory and multiplexing I/O as main contributions, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture, unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not really altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but apparently the authors were able to do it anyway.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple levels of hypervisors comes down to efficiency. For example: virtualization on servers has been rapidly gaining popularity. The next evolutionary step is to extend the single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing field of cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of choice. In addition, nesting provides increased security via hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for supporting nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architectural support: every hypervisor handles the hypervisors running directly on top of it. For instance, if L0 (the host hypervisor) runs L1 and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will do the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architectural support: this is the model supported by x86 machines, and it is tied to the concept of &amp;quot;trap and emulate&amp;quot;. Every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to operate with hardware-level privileges, it evokes a fault or a trap; this trap is then caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate request. If it is, the host emulates the operation on the guest&#039;s behalf, again letting the guest think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor first. The host hypervisor then forwards the trap and the virtualization state to the level responsible for handling it. For instance, if L0 runs L1 and L1 attempts to run L2, then the command to run L2 traps down to L0, and L0 forwards it back to L1. This is the model we&#039;re interested in, because it is what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
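The forwarding rule can be sketched as a tiny helper (hypothetical, simplified to a single forwarding step; in deeper nestings the receiving hypervisor may itself forward the exit further up):&lt;br /&gt;

```python
# Toy model of single-level (x86-style) support: the hardware delivers
# every exit to L0 first, and L0 forwards it to the hypervisor directly
# beneath the VM that trapped. Illustrative only.

def handle_exit(trapping_level):
    """Return the list of hypervisor levels involved in one exit,
    in the order they see it. Level 0 is the bottom hypervisor."""
    path = [0]                        # hardware delivers every exit to L0
    responsible = trapping_level - 1  # the hypervisor that runs this VM
    if responsible != 0:
        path.append(responsible)      # L0 forwards the exit upward
    return path

# An L2 exit: hardware hands it to L0, and L0 forwards it to L1.
print(handle_exit(2))   # [0, 1]
# An L1 exit: L0 is the responsible hypervisor and handles it itself.
print(handle_exit(1))   # [0]
```

This is exactly why exits are expensive in the nested case: even traps that logically belong to L1 must first bounce through L0.&lt;br /&gt;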
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation.&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How does nested VMX virtualization work:&lt;br /&gt;
L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (VMCS: virtual machine control structure). The VMCS is the fundamental data structure a hypervisor prepares to describe a virtual machine; it is handed to the CPU for execution. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch traps, and L0 has to handle the trap, because L1 is itself running as a virtual machine and only L0 occupies the architectural role of the hypervisor. So, to multiplex the hardware, L2 has to run as a virtual machine of L0: L0 merges the VMCSs, combining VMCS0-&amp;gt;1 with VMCS1-&amp;gt;2 into VMCS0-&amp;gt;2, which enables L0 to run L2 directly. L0 then launches L2; whenever L2 causes a trap, L0 either handles it itself or forwards it to L1, depending on whether it is L1&#039;s responsibility, as that virtual machine&#039;s hypervisor, to handle it.&lt;br /&gt;
To handle even a single L2 exit, L1 needs to read and write the VMCS and disable interrupts. This would not normally be a problem, but because L1 is running in guest mode as a virtual machine, each of those operations traps, so a single high-level L2 exit (or L3 exit) causes many exits (and more exits means less performance). This problem was addressed by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0, depending on the trap, finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
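The VMCS merge can be sketched roughly as follows. The field names are invented for illustration; the real VMCS layout is fixed by Intel&#039;s VMX specification, and the real merge picks fields per category rather than blindly overlaying dictionaries:&lt;br /&gt;

```python
# Rough sketch of building VMCS0->2: L0 combines the control/host state
# it keeps for running L1 (VMCS0->1) with the guest state L1 prepared
# for L2 (VMCS1->2). Field names are made up for illustration.

def merge_vmcs(vmcs_0_1, vmcs_1_2):
    """Start from L0 view (host state, controls), then overlay the
    fields describing the L2 guest from the VMCS that L1 prepared."""
    merged = dict(vmcs_0_1)   # L0 host state and control fields
    merged.update(vmcs_1_2)   # L2 guest state wins where both define a field
    return merged

vmcs_0_1 = {"host_state": "L0", "controls": "l0-controls"}
vmcs_1_2 = {"guest_rip": 0x1000, "guest": "L2"}
vmcs_0_2 = merge_vmcs(vmcs_0_1, vmcs_1_2)
print(vmcs_0_2["host_state"], vmcs_0_2["guest"])   # L0 L2
```

The design point: the hardware only understands one level of VMCS, so the two logical specifications have to be flattened into one structure that L0 can launch directly.&lt;br /&gt;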
&lt;br /&gt;
How does multi-dimensional paging work:&lt;br /&gt;
The main idea: with n = 2 nested virtualization there are three logical translations: from an L2 virtual to an L2 physical address, from an L2 physical to an L1 physical address, and from an L1 physical to an L0 physical address. That is three levels of translation, but the hardware MMU only supports two page tables (the second being the EPT): virtual to guest-physical, and guest-physical to host-physical. The authors compress the three translations onto the two available tables, going from start to end in two hops instead of three. This is done with a shadow page table for the virtual machine, i.e. shadow-on-EPT, which compresses the three logical translations into two tables. Because the EPT tables rarely change, while the guest page tables change frequently, L0 emulates EPT for L1 and uses EPT0-&amp;gt;1 and EPT1-&amp;gt;2 to construct EPT0-&amp;gt;2. This process results in fewer exits.&lt;br /&gt;
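The table compression is essentially a composition of two mappings; a toy model (real EPTs are multi-level radix trees, and the real code must also handle faults on missing entries):&lt;br /&gt;

```python
# Sketch of compressing two translation tables into one: given
# EPT1->2 (L2-physical to L1-physical) and EPT0->1 (L1-physical to
# L0-physical), L0 can build EPT0->2 so an L2 address resolves in a
# single hop. Page tables are modelled as plain dicts of page numbers.

def compose(ept_1_2, ept_0_1):
    """Build EPT0->2 by walking each mapped L2 page through both tables.
    Pages whose intermediate mapping is absent are simply left out;
    touching them would fault and be filled in lazily."""
    return {l2_page: ept_0_1[l1_page]
            for l2_page, l1_page in ept_1_2.items()
            if l1_page in ept_0_1}

ept_0_1 = {0xA: 0x100, 0xB: 0x200}   # L1-phys -> L0-phys
ept_1_2 = {0x1: 0xA, 0x2: 0xB}       # L2-phys -> L1-phys
ept_0_2 = compose(ept_1_2, ept_0_1)
print(ept_0_2)                        # {1: 256, 2: 512}
```

Because EPT0-&amp;gt;1 rarely changes, the composed table stays valid for long stretches, which is what cuts down the exit rate.&lt;br /&gt;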
&lt;br /&gt;
How does I/O virtualization work:&lt;br /&gt;
There are three fundamental ways for a virtual machine to access I/O: device emulation (Sugerman01), para-virtualized drivers that are aware they are running on a hypervisor (Barham03, Russell08), and direct device assignment (LeVasseur04, Yassour08), which gives the best performance. To get the best performance, the authors used an IOMMU for safe DMA bypass. With nesting there are 3x3 options for I/O virtualization, but they chose multi-level device assignment, giving the L2 guest direct access to L0&#039;s devices and bypassing both L0 and L1. To do this they had to handle memory-mapped I/O, programmed I/O, DMA and interrupts. The idea with DMA is that each hypervisor (L0 and L1) needs an IOMMU to let its virtual machine access the device safely. Since the hardware provides only a single level of IOMMU translation, L0 emulates an IOMMU for L1 and then compresses the multiple IOMMU tables into the single hardware IOMMU page table, so that L2 can program the device directly and the device can DMA into L2&#039;s memory space directly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How they implemented the micro-optimizations to make it go faster:&lt;br /&gt;
The two main places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor are the transitions between L1 and L2 and the exit-handling code running on the L1 hypervisor. Since L1 and L2 are assumed to be unmodified, the required changes were made in L0 only.&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance of Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization is 10.3% with kernbench and 6.3% with SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in the guest hypervisor L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits generated by the exit-handling code itself).&lt;br /&gt;
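The first optimization can be pictured as replacing a trapping accessor with a plain memory read; a toy contrast (illustrative names; in the real system this bypass only applies under the specific conditions described in the paper):&lt;br /&gt;

```python
# Toy contrast between trap-and-emulate VMCS access and the optimized
# direct access: if L0 keeps L1 VMCS data in ordinary guest-visible
# memory, L1 can read fields without exiting. A counter stands in for
# the cost of exits.

class VmcsAccess:
    def __init__(self):
        self.vmcs = {"guest_rip": 0x1000}
        self.exits = 0

    def vmread_trapped(self, field):
        self.exits += 1               # vmread in guest mode -> exit to L0
        return self.vmcs[field]

    def vmread_direct(self, field):
        return self.vmcs[field]       # plain memory load, no exit

a = VmcsAccess()
for _ in range(3):
    a.vmread_trapped("guest_rip")
print(a.exits)    # 3 exits without the optimization

b = VmcsAccess()
for _ in range(3):
    b.vmread_direct("guest_rip")
print(b.exits)    # 0 exits with direct access
```

Since L1 reads and writes the VMCS on every single L2 exit, shaving the trap off each access is where much of the speedup comes from.&lt;br /&gt;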
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The authors also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5727</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5727"/>
		<updated>2010-11-30T13:41:46Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem: Michael Bingham&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable; in fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. Haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done by the 24th at the latest; we should leave the last day for the editing and stuff. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here... the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially regarding the research contribution and critique; I mean, those sections should not be written from the perspective of one person, we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add it, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter the redundancy later. And let&#039;s make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because at last Wednesday&#039;s lab the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to Ozzy, so I&#039;ve got free time now. I am reading it again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if he is still in our group but doesn&#039;t participate, too bad for him. --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The related work section has all the things we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper. I will definitely be adding those papers by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should get the background concepts done, hopefully by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned the most important part of the paper is the critique, so we gotta focus on that all together, not just one person, for sure.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone comes across something they think is good or bad, you can add it below to the good/bad points. Maybe the group work idea is bad, but I just thought that if each member focuses on a specific part in the beginning, we can have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I figured is that the paper doesn&#039;t directly hint at why nested virtualization is necessary. I posted a link in the references and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually the paper does talk about that. Look at the first two paragraphs in the introduction section of the paper on page 1. But you&#039;re right, they don&#039;t really elaborate; I think it&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC) &lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That was actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like: nested virtualization, the need for and advantages of NV, the models, the trap and emulate model of x86 machines, computer paging, which is discussed in the paper, and computer ring security, which again they touch on at some point in the paper. I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing that I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate) but provide optimizations and ways to increase the efficiency of the trap calls between the nested environments, so that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach that we should follow in terms of presenting the material, and he mentioned that we need to provide enough information in each section to make our fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or something.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah I am really thankful he left us with another week to do it.  I am sure we all have at least 3 projects due soon, other than this Essay.  I&#039;ll type up the stuff that I had highlighted for Tuesday as a break tomorrow.  I was going to do it yesterday but he gave us an extension, so I slacked off a bit.  I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Hey dudes. I have posted the first part of the background concepts here in the discussion and on the main page as well. This is just a rough version, so I will be constantly expanding it and adding resources later on today. I have also created and added a diagram for illustration; as far as I know, we should be allowed to do this. If anyone has any suggestions about what I have posted, or any counterarguments, please discuss. I will also be moving some of the stuff I wrote here (the theory section) to the main page as well.&lt;br /&gt;
&lt;br /&gt;
Regarding the critique, I guess the excessive number of exits can somehow be seen as a &#039;&#039;&#039;scalability&#039;&#039;&#039; constraint, maybe making the overall design too complex or difficult to get a hold of. I&#039;m not sure about this, just guessing from a general programming point of view. I will email the prof today; maybe he can give us some hints for what can be considered a weakness or a bad spot, if you will, in the paper. &lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing the sixth member of the group: Shawn Hansen. --[[User:Hesperus|Hesperus]] 06:57, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Hey guys. I can start working on the research problem part of the essay. I&#039;ll put it up here when I have a rough version, then move it to the actual article. As for the critique section, how about we put a section on the talk page here where people can add what they thought worked/didn&#039;t work, with some explanation/references, and then we can get someone (or some people) to combine it and put it in the essay? &lt;br /&gt;
--[[User:Mbingham|Mbingham]] 18:13, 29 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------------&lt;br /&gt;
&lt;br /&gt;
Nice man.  Sorry I haven&#039;t updated with anything that I have done yet, but I&#039;ll have  it up later today or tomorrow.  I got both an Essay and game dev project done for tomorrow, so after 1 I will be free to work on this until it is time for 3004--JSlonosky 13:41, 30 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Virtualization===&lt;br /&gt;
&lt;br /&gt;
In essence, virtualization is the creation of an emulation of the underlying hardware for a guest operating system, program or process to operate on. [1] Usually referred to as a virtual machine, this emulation, which includes a guest hypervisor and a virtualized environment, gives the guest virtual machine the illusion that it is running directly on the main hardware. In other words, we can view this virtual machine as an application running on the host OS.&lt;br /&gt;
 &lt;br /&gt;
The term virtualization has become rather broad, associated with a number of areas where this technology is used like data virtualization, storage virtualization, mobile virtualization and network virtualization. For the purposes and context of our assigned paper, we shall focus our attention on hardware virtualization within operating systems environments.&lt;br /&gt;
&lt;br /&gt;
====Hypervisor==== &lt;br /&gt;
Also referred to as a VMM (virtual machine monitor), the hypervisor is a software module that sits one level above the supervisor and runs directly on the bare hardware to monitor the execution and behaviour of the guest virtual machines. The main task of the hypervisor is to provide an emulation of the underlying hardware (CPU, memory, I/O, drivers, etc.) to the guest virtual machines, and to take care of the issues that may arise from the interaction of those guest virtual machines with one another and with the host hardware and operating system. It also controls host resources.&lt;br /&gt;
&lt;br /&gt;
====Nested virtualization====&lt;br /&gt;
Nested virtualization is the concept of recursively running one or more virtual machines inside one another. For instance, the host hypervisor (L0) runs a VM called L1; in turn, L1 runs another VM, L2; L2 then runs L3, and so on.&lt;br /&gt;
&lt;br /&gt;
====Para-virtualization====&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Trap and emulate model===&lt;br /&gt;
A virtualization model based on the idea that when a guest hypervisor attempts to execute or access privileged hardware context, it triggers a trap or a fault which is caught by the host hypervisor. The host hypervisor then determines whether this instruction should be allowed to execute or not, and based on that, it provides an emulation of the requested outcome to the guest hypervisor. The x86 systems discussed in the Turtles Project research paper follow this model.&lt;br /&gt;
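To make the trap and emulate flow concrete, here is a toy simulation. This is only a sketch: the class names and the instruction list are invented for illustration, and real hypervisors work at the instruction-set level, not in a high-level language.

```python
# Toy simulation of the trap-and-emulate model: when a guest issues a
# privileged instruction, control traps to the host hypervisor, which
# decides whether to emulate the requested effect. All names here are
# invented for illustration.

PRIVILEGED = {"write_cr3", "hlt", "vmlaunch"}   # assumed privileged ops

class HostHypervisor:
    def __init__(self):
        self.trap_log = []

    def handle_trap(self, guest, instruction):
        """Catch the guest's trap and emulate the instruction's effect."""
        self.trap_log.append(instruction)
        if instruction == "hlt":
            guest.halted = True          # emulate: mark the vCPU halted
        return "emulated"                # guest believes it ran on hardware

class Guest:
    def __init__(self, host):
        self.host = host
        self.halted = False

    def execute(self, instruction):
        if instruction in PRIVILEGED:
            # privileged access faults; the host catches and emulates it
            return self.host.handle_trap(self, instruction)
        return "executed directly"       # unprivileged code runs natively

host = HostHypervisor()
guest = Guest(host)
print(guest.execute("add"))   # unprivileged: runs directly
print(guest.execute("hlt"))   # privileged: traps to the host
```

The point of the sketch is that the guest never notices the difference: both paths return a result as if the instruction had run on real hardware.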
&lt;br /&gt;
===The uses of nested virtualization===&lt;br /&gt;
&lt;br /&gt;
====Compatibility====&lt;br /&gt;
A system could provide the user with a compatibility mode for other operating systems or applications. An example of this would&lt;br /&gt;
be the Windows XP mode that&#039;s available in Windows 7, where Windows 7 runs Windows XP as a virtual machine.&lt;br /&gt;
&lt;br /&gt;
====Cloud computing====&lt;br /&gt;
A cloud provider, more formally referred to as an Infrastructure-as-a-Service (IaaS) provider, could use nested virtualization to give customers the ability to host their own preferred user-controlled hypervisors and run their virtual machines on the provider&#039;s hardware. This way both sides benefit: the provider can attract customers, and the customers have the freedom to implement their systems on the host hardware without worrying about compatibility issues.&lt;br /&gt;
&lt;br /&gt;
The most well known example of an IAAS provider is Amazon Web Services (AWS). AWS presents a virtualized platform for other services and web sites such as NetFlix to host their API and database on Amazon&#039;s hardware.&lt;br /&gt;
&lt;br /&gt;
====Security==== &lt;br /&gt;
[Coming...]&lt;br /&gt;
&lt;br /&gt;
====Migration/Transfer of VMs====&lt;br /&gt;
Nested virtualization can also be used in live migration or transfer of virtual machines in cases of upgrade or disaster &lt;br /&gt;
recovery. Consider a scenario where a number of virtual machines must be moved to a new hardware server for an upgrade: instead of having to move each VM separately, we can nest those virtual machines and their hypervisors to create one nested entity that&#039;s easier to deal with and more manageable.&lt;br /&gt;
In the last couple of years, virtualization packages such as VMWare and VirtualBox have adopted this notion of live migration and developed their own embedded migration/transfer agents.&lt;br /&gt;
&lt;br /&gt;
====Testing====&lt;br /&gt;
Using virtual machines is convenient for testing, evaluation and benchmarking purposes. Since a virtual machine is essentially&lt;br /&gt;
a file on the host operating system, if corrupted or damaged it can easily be removed, recreated or even restored, since we&lt;br /&gt;
can create a snapshot of the running virtual machine.&lt;br /&gt;
&lt;br /&gt;
===Protection rings===&lt;br /&gt;
[Coming....]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concept stuff, so Munther feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* Firstly, nested virtualization. Why we use nested virtualization (paper gives example of XP inside win 7). Maybe going over the trap and emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization. The difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the terminology of L0, ..., Ln with L0 being the bottom hypervisor, etc&lt;br /&gt;
* x86 nested virtualization limitations. Single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning that they&#039;re not altering the underlying architecture, and this is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, yet the authors were able to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages their own virtual machine directly through a hypervisor of their choice. In addition, it provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for applying nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: where every hypervisor handles the hypervisors running on top of it. For instance, suppose L0 (the host hypervisor) runs L1: if L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will take care of the trap handling, and so on.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: This is the model supported by x86 machines, and it is tied into the concept of &amp;quot;trap and emulate&amp;quot;: every hypervisor emulates the underlying hardware (the VMX extensions in the paper&#039;s implementation) and presents a fake ground for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea here is that in order for a guest hypervisor to operate and gain hardware-level privileges, it triggers a fault or a trap; this trap is then caught by the main host hypervisor and inspected to see whether it&#039;s a legitimate or appropriate request. If it is, the host grants the privilege to the guest, again having it think that it&#039;s actually running on the main bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, everything must go back to the main host hypervisor. The host hypervisor then forwards the trap and the virtualization work to the level that is responsible. For instance, suppose L0 runs L1 and L1 attempts to run L2: the command to run L2 goes down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it&#039;s what x86 machines follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
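As a rough sketch of the single-level model (the names and event strings are invented for illustration), the key point is that the hardware delivers every exit to L0, and L0 forwards it to whichever hypervisor is responsible for the trapping guest:

```python
# Toy model of single-level virtualization support: no matter how deeply
# VMs are nested, every trap (exit) first lands at L0; L0 then forwards
# it to the hypervisor responsible for the trapping guest (its parent).
# Names and structure are invented for illustration.

class Hypervisor:
    def __init__(self, level, parent=None):
        self.level = level
        self.parent = parent
        self.handled = []

    def handle(self, exit_event):
        self.handled.append(exit_event)

def deliver_exit(l0, trapping_guest, exit_event):
    """Hardware delivers every exit to L0; L0 forwards to the parent."""
    responsible = trapping_guest.parent     # parent of the trapping level
    if responsible is l0:
        l0.handle(exit_event)               # L0 handles its own guests
    else:
        # In reality the forwarding itself causes further exits -- the
        # source of the overhead the paper's optimizations target.
        responsible.handle(exit_event)

l0 = Hypervisor(0)
l1 = Hypervisor(1, parent=l0)
l2 = Hypervisor(2, parent=l1)

deliver_exit(l0, l1, "vmlaunch by L1")    # L1 trapped: L0 is responsible
deliver_exit(l0, l2, "page fault in L2")  # L2 trapped: forwarded to L1
```

Compare this with the multiple-level model above, where L2's exits would reach L1 without bouncing through L0 at all.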
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
&lt;br /&gt;
==Implementation==&lt;br /&gt;
The Turtles project has four components that are crucial to its implementation:&lt;br /&gt;
* Nested VMX virtualization for nested CPU virtualization&lt;br /&gt;
* Multi-dimensional paging for nested MMU virtualization&lt;br /&gt;
* Multi-level device assignment for nested I/O virtualization&lt;br /&gt;
* Micro-Optimizations to make it go faster&lt;br /&gt;
&lt;br /&gt;
How nested VMX virtualization works:&lt;br /&gt;
L0 (the lowest hypervisor) runs L1 with VMCS0-&amp;gt;1 (VMCS: virtual machine control structure). The VMCS is the fundamental data structure that a hypervisor prepares to describe a virtual machine; it is passed along to the CPU to be executed. L1 (also a hypervisor) prepares VMCS1-&amp;gt;2 to run its own virtual machine and executes vmlaunch. vmlaunch will trap, and L0 has to handle the trap, because L1 is itself running as a virtual machine: L0 is the one using the architectural mode for a hypervisor. To make the multiplexing work, L2 has to run as a virtual machine of L0 on L1&#039;s behalf. So L0 merges the VMCSs: VMCS0-&amp;gt;1 is merged with VMCS1-&amp;gt;2 to become VMCS0-&amp;gt;2 (enabling L0 to run L2 directly). L0 now launches L2; when L2 causes a trap, L0 either handles the trap itself or forwards it to L1, depending on whether it is L1&#039;s responsibility as L2&#039;s hypervisor.&lt;br /&gt;
To handle a single L2 exit, L1 needs to read and write the VMCS and disable interrupts, which wouldn&#039;t normally be a problem; but because L1 is running in guest mode as a virtual machine, all of those operations trap as well, so a single high-level L2 exit (or L3 exit) causes many exits (more exits, less performance). The paper corrects this by making a single exit fast and by reducing the frequency of exits with multi-dimensional paging. In the end, L1 or L0 (depending on the trap) finishes handling it and resumes L2, and this process repeats continuously.&lt;br /&gt;
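The VMCS merge idea can be sketched as follows. This is only an illustration under assumed field names: real VMCS fields are hardware-defined, and the actual merge in the paper is done field by field on those hardware structures.

```python
# Sketch of the VMCS "merge" idea from the Turtles design: L0 combines
# the control structure it uses to run L1 (vmcs01) with the one L1
# prepared for L2 (vmcs12) into vmcs02, which lets L0 run L2 directly
# on the CPU. The dict keys below are stand-ins, not real VMCS fields.

def merge_vmcs(vmcs01, vmcs12):
    """Build vmcs02: guest state comes from what L1 wants for L2
    (vmcs12); host state must return control to L0, so it comes from
    vmcs01; traps are taken on anything either level intercepts."""
    return {
        "guest_state": vmcs12["guest_state"],  # L2's register/paging state
        "host_state": vmcs01["host_state"],    # exits must land back in L0
        "controls": vmcs01["controls"] | vmcs12["controls"],
    }

vmcs01 = {"guest_state": "L1 state", "host_state": "L0 entry point",
          "controls": {"ept_violation", "vmx_instr"}}
vmcs12 = {"guest_state": "L2 state", "host_state": "L1 entry point",
          "controls": {"io_access"}}

vmcs02 = merge_vmcs(vmcs01, vmcs12)
print(vmcs02["guest_state"])   # L2 runs...
print(vmcs02["host_state"])    # ...but every exit goes to L0
```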
&lt;br /&gt;
How multi-dimensional paging works:&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
Two benchmarks were used: kernbench, which compiles the Linux kernel multiple times, and SPECjbb, which is designed to measure server-side performance for Java run-time environments.&lt;br /&gt;
&lt;br /&gt;
Overhead for nested virtualization with kernbench is 10.3%, and 6.3% for SPECjbb. &lt;br /&gt;
There are two sources of overhead evident in nested virtualization. First, the transitions between L1 and L2 are slower than the transitions at the lower level of the nested design (between L0 and L1). Second, the exit-handling code running in a guest hypervisor such as L1 is much slower than the same code running in L0.&lt;br /&gt;
&lt;br /&gt;
The paper outlines optimization steps to achieve the minimal overhead.&lt;br /&gt;
&lt;br /&gt;
1. Bypassing vmread and vmwrite instructions and directly accessing the data under certain conditions, removing the need to trap and emulate.&lt;br /&gt;
&lt;br /&gt;
2. Optimizing the exit-handling code (the main cause of the slowdown is the additional exits produced in the exit-handling code).&lt;br /&gt;
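A back-of-the-envelope sketch of why optimization 1 matters (the access count per exit is an invented number, not a figure from the paper): if every vmread/vmwrite issued by L1 while handling one L2 exit itself traps to L0, a single logical exit fans out into many; letting L1 access a copy of the VMCS in ordinary memory removes those secondary traps.

```python
# Illustrative arithmetic for the vmread/vmwrite bypass. Without the
# bypass, each VMCS access by L1 is itself an exit to L0, so one L2 exit
# multiplies into many exits seen by L0. The constant is an assumption
# made up for this sketch.

VMCS_ACCESSES_PER_EXIT = 30     # assumed vmread/vmwrite count per L2 exit

def exits_seen_by_l0(l2_exits, bypass_enabled):
    if bypass_enabled:
        return l2_exits                          # only the real exits
    # each VMCS access by L1 traps, adding one exit per access
    return l2_exits * (1 + VMCS_ACCESSES_PER_EXIT)

print(exits_seen_by_l0(100, bypass_enabled=False))  # 3100
print(exits_seen_by_l0(100, bypass_enabled=True))   # 100
```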
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I have read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. It also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5548</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5548"/>
		<updated>2010-11-24T23:43:55Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=Group work=&lt;br /&gt;
* Background concepts: Munther Hussain&lt;br /&gt;
* Research problem:&lt;br /&gt;
* Contribution:&lt;br /&gt;
* Critique:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable; in fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc., things that we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man? I&#039;d love to see Halford though; I&#039;m sure he&#039;ll do some classic Priest material. I haven&#039;t checked the new record yet, but the cover looks awful: definitely the worst and most ridiculous cover of the year. Anyway, enough music talk. I think we should get it done by the 24th at the latest, leaving the last day for the editing and such. I removed Smcilroy from the members list; I think he checked in here by mistake, because I can see him in group 7. So far we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, that would be pretty sweet. I figured I might as well see him while I can, since he is going to be dead soon. How is he not already? Alright, well, the other member should show up soon, or I&#039;d guess we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here; the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and research problem sections.&lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure how we should divide the work and sections among the members, especially the research contribution and critique. Those sections should not be written from the perspective of one person; we all need to work on and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, please do, but don&#039;t edit or alter the already existing content. Let&#039;s try to get as many thoughts/ideas as possible, and then we will edit and filter out the redundancy later. And let&#039;s make sure we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. It&#039;s weird, because the prof told me that he attended last Wednesday&#039;s lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
Yeah man, we really do need to get on this. Not going to Ozzy, so I have free time now. I am reading the paper again to refresh my memory of it and will put up notes on what I think we can criticize about it and such. What kind of references do you think we will need? Similar papers, etc.?&lt;br /&gt;
If you need to get a hold of me, the best way is through email: jslonosk@connect.Carleton.ca. And if that guy is still in our group but doesn&#039;t participate, too bad for him. --JSlonosky 14:42, 22 November 2010 (UTC)&lt;br /&gt;
----------&lt;br /&gt;
The paper&#039;s related work section has everything we need as far as other papers go. Also, I was able to find other research papers that are not mentioned in the paper; I will definitely be adding those by tonight. For the time being, I will handle the background concepts. I added a group work section below to keep track of who&#039;s doing what. I should hopefully get the background concepts done by tonight. If anyone wants to help with the other sections, that would be great; please add your name to the section you want to handle below.&lt;br /&gt;
&lt;br /&gt;
I added a general paper summary below just to illustrate the general idea behind each section. If anybody wants to add anything, feel free to do so. --[[User:Hesperus|Hesperus]] 18:55, 22 November 2010 (UTC)&lt;br /&gt;
-----------&lt;br /&gt;
I remember the prof mentioned that the most important part of the paper is the critique, so we should definitely focus on that all together, not just one person.--[[User:Praubic|Praubic]] 19:22, 22 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-------------&lt;br /&gt;
Yeah, absolutely, I agree. But first, let&#039;s pin down the crucial points, and then we can discuss them collectively. If anyone comes across something they think is good or bad, add it below to the good/bad points. Maybe the group work idea is bad, but I thought that if each member focuses on a specific part in the beginning, we can have a better overall idea of what the paper is about. --[[User:Hesperus|Hesperus]] 19:42, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Ok, another thing I noticed is that the paper doesn&#039;t directly explain why nested virtualization is necessary. I posted a link in the references, and I&#039;ll try to research more into the purpose of nested virtualization.--[[User:Praubic|Praubic]] 19:45, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
Actually, the paper does talk about that; look at the first two paragraphs of the introduction on page 1. But you&#039;re right, they don&#039;t really elaborate. I think that&#039;s because it&#039;s not the purpose or the aim of the paper in the first place. --[[User:Hesperus|Hesperus]] 20:31, 22 November 2010 (UTC)&lt;br /&gt;
--------------&lt;br /&gt;
The stuff that Michael provided is excellent. That is actually what I was planning on doing. I will start by defining virtualization, hypervisors, computer ring security, the need for and uses of nested virtualization, the models, etc. --[[User:Hesperus|Hesperus]] 22:14, 22 November 2010 (UTC)&lt;br /&gt;
-------------&lt;br /&gt;
So here&#039;s my question: who&#039;s doing what in the group work, and where should I focus my attention to do my part? - Csulliva&lt;br /&gt;
-------------&lt;br /&gt;
I have posted a few things regarding the background concepts on the main page. I will go back and edit it today and talk about other things like nested virtualization, the need for and advantages of NV, the models, the trap-and-emulate model of x86 machines, memory paging (which is discussed in the paper), and computer ring security (which they also touch on at some point in the paper). I can easily move some of the things I wrote in the theory section to the main page, but I want to consult the prof first on some of those things.&lt;br /&gt;
&lt;br /&gt;
One thing I&#039;m still unsure of is how far we should go here. Should we provide background on the hardware architecture used by the authors, like the x86 family and the VMX extensions, or maybe some of the concepts discussed later on in the testing, such as optimization, emulation, and para-virtualization?&lt;br /&gt;
&lt;br /&gt;
I will consult the prof today after our lecture. If other members want to help, you guys can start with the related work and see how the content of the paper compares to previous or even current research papers. --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
------------------------&lt;br /&gt;
In response to what Michael mentioned above in the background section: we should definitely talk about that. From what I understood, they apply the same model (trap and emulate), but they provide optimizations and ways to increase the efficiency of trap handling between the nested environments. So that&#039;s definitely a contribution, but it&#039;s more of a performance-optimization kind of contribution, I guess, which is why I mentioned the optimizations in the contribution section below.  --[[User:Hesperus|Hesperus]] 08:08, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
&#039;&#039;&#039;Ok, so for those who didn&#039;t attend today&#039;s lecture, the prof was nice enough to give us an extension for the paper; the due date is now Dec 2nd.&#039;&#039;&#039; And that&#039;s really good, given that some of those concepts require time to formulate. I also asked the prof about the approach we should follow in presenting the material, and he mentioned that you need to provide enough information in each section to make your fellow students understand what the paper is about without them having to actually read the paper or go through it in detail. He also mentioned the need to distill some of the details: if the paper spends a whole page explaining multi-dimensional paging, we should probably explain that in 2 small paragraphs or so.&lt;br /&gt;
&lt;br /&gt;
Also, we should always cite resources. If the resource is a book, we should cite the page number as well. --[[User:Hesperus|Hesperus]] 15:16, 23 November 2010 (UTC)&lt;br /&gt;
---------------------------&lt;br /&gt;
Yeah, I am really thankful he left us another week to do it. I am sure we all have at least 3 projects due soon other than this essay. I&#039;ll type up the stuff I had highlighted for Tuesday as a break tomorrow. I was going to do it yesterday, but he gave us an extension, so I slacked off a bit. I also forgot :/ --JSlonosky 23:43, 24 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
=Paper summary=&lt;br /&gt;
==Background Concepts and Other Stuff==&lt;br /&gt;
&lt;br /&gt;
EDIT: Just noticed that someone has put their name down to do the background concepts stuff, so Munther, feel free to use this as a starting point if you like.&lt;br /&gt;
&lt;br /&gt;
The above looks good. I thought I&#039;d maybe start touching on some of the sections, so let me know what you guys think. Here&#039;s what I think would be useful to go over in the Background Concepts section:&lt;br /&gt;
&lt;br /&gt;
* First, nested virtualization: why we use it (the paper gives the example of XP running inside Win 7), and maybe a walk through the trap-and-emulate model of nested virtualization.&lt;br /&gt;
* Some of the terminology of nested virtualization: the difference between guest/host hypervisors (we&#039;re already familiar with guest/host OSs), the L0, ..., Ln notation with L0 being the bottom hypervisor, etc.&lt;br /&gt;
* x86 nested virtualization limitations: single-level architecture, guest/host mode, VMX instructions and how to emulate them. Some of this is in section 3.2 of the paper.&lt;br /&gt;
&lt;br /&gt;
Again, anything else you guys think we should add would be great.&lt;br /&gt;
&lt;br /&gt;
Commenting some more on the above summary, under the &amp;quot;main contributions&amp;quot; part, do you think we should count the nested VMX virtualization part as a contribution? If we have multiplexing memory and multiplexing I/O as a main contribution, it would seem to make sense to have multiplexing the CPU as well, especially within the limitations of the x86 architecture. Unless they are using someone else&#039;s technique for virtualizing these instructions.--[[User:Mbingham|Mbingham]] 21:16, 22 November 2010 (UTC)&lt;br /&gt;
==Research problem==&lt;br /&gt;
The paper provides a solution for nested virtualization on x86-based computers. Their approach is software-based, meaning they&#039;re not really altering the underlying architecture. This is basically the most interesting thing about the paper: x86 computers don&#039;t support nested virtualization in hardware, but they were able to do it anyway.&lt;br /&gt;
In general, nested virtualization is not supported on x86 systems (the architecture is not designed with it in mind), but, for example, Windows 7 runs an XP VM under the covers when running XP programs, which shows the demand for virtualization support even at a single hypervisor level.&lt;br /&gt;
&lt;br /&gt;
The goal of nested virtualization and multiple host hypervisors comes down to efficiency. Example: Virtualization on servers has been rapidly gaining popularity. The next evolution step is to extend a single level of memory management virtualization support to handle nested virtualization, which is critical for &#039;&#039;high performance&#039;&#039;. [1]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How does the concept apply to the quickly developing cloud computing?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A cloud user manages his own virtual machine directly through a hypervisor of his choice. In addition, it provides increased security through hypervisor-level intrusion detection.&lt;br /&gt;
&lt;br /&gt;
==Related work==&lt;br /&gt;
&lt;br /&gt;
Comparisons with other related/similar research and work:&lt;br /&gt;
&lt;br /&gt;
Refer to the following website and to the related work section in the paper regarding this section: &lt;br /&gt;
http://www.spinics.net/lists/kvm/msg43940.html&lt;br /&gt;
&lt;br /&gt;
[This is a forum post by one of the authors of our assigned paper where he talks about more recent research work on virtualization, particularly in his first paragraph, he refers to some more recent research by the VMWare technical support team. He also talks about some of the research papers referred to in our assigned paper.] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Theory (Section 3.1)== &lt;br /&gt;
&lt;br /&gt;
Apparently, there are two models for implementing nested virtualization:&lt;br /&gt;
&lt;br /&gt;
* Multiple-level architecture support: every hypervisor handles the hypervisor running directly on top of it. For instance, if L0 (the host hypervisor) runs L1, and L1 attempts to run L2, then the trap handling and the work needed to allow L1 to instantiate a new VM are handled by L0. More generally, if L2 attempts to create its own VM, then L1 will handle the trapping and such.&lt;br /&gt;
&lt;br /&gt;
* Single-level architecture support: this is the model supported by x86 machines. It is tied to the concept of &amp;quot;trap and emulate&amp;quot;, where every hypervisor emulates the underlying hardware (the VMX hardware in the paper&#039;s implementation) and presents a fake platform for the hypervisor running on top of it (the guest hypervisor) to operate on, letting it think that it&#039;s running on the actual hardware. The idea is that when a guest hypervisor tries to perform an operation requiring hardware-level privileges, it causes a fault or a trap. This trap is caught by the main host hypervisor and inspected to see whether it is a legitimate or appropriate request; if it is, the host emulates it on behalf of the guest, again having the guest think that it&#039;s actually running on the bare-metal hardware.&lt;br /&gt;
&lt;br /&gt;
In this model, every trap must go back to the main host hypervisor. The host hypervisor then forwards the trap and the virtualization state to the level responsible for handling it. For instance, if L0 runs L1, and L1 attempts to run L2, the command to run L2 first goes down to L0, and L0 then forwards it back up to L1. This is the model we&#039;re interested in, because it&#039;s what x86 machines basically follow. Look at figure 1 in the paper for a better understanding of this.&lt;br /&gt;
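To make the difference between the two models concrete, here is a quick toy sketch in Python (my own illustration, not code from the paper): it just lists which hypervisor levels a trap from level Ln visits before the hypervisor directly below it (L_{n-1}) actually handles it.

```python
# Toy model of the two nested-virtualization models described above.
# Illustrative only; function names and the path representation are mine.

def multi_level_path(n):
    """Multiple-level hardware support: a trap from Ln goes straight
    to the hypervisor directly below it, L_{n-1}."""
    return [n - 1]

def single_level_path(n):
    """Single-level (x86) support: the hardware delivers every trap to
    L0, which forwards it up one level at a time until L_{n-1} gets it."""
    return list(range(n))  # visits L0, L1, ..., L_{n-1}

# A trap from L2: handled directly by L1 with multi-level support,
# but routed through L0 first on x86.
print(multi_level_path(2))   # [1]
print(single_level_path(2))  # [0, 1]
```

The point is that on x86 the forwarding path grows with the nesting depth, which is exactly why the paper cares so much about making each hop cheap.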
&lt;br /&gt;
==Main contribution==&lt;br /&gt;
The paper proposes two newly developed techniques:&lt;br /&gt;
* Multi-dimensional paging (for memory virtualization)&lt;br /&gt;
* Multiple-level device management (for I/O virtualization)&lt;br /&gt;
&lt;br /&gt;
Other contributions:&lt;br /&gt;
* Micro-optimizations to improve performance.&lt;br /&gt;
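On the multi-dimensional paging point above, the rough idea, as I understand it (toy sketch with made-up table values, not the paper's implementation), is that with nesting a memory access would normally need two translations, L2-virtual to L1-physical (guest table) and L1-physical to L0-physical (host table), and L0 can instead maintain a single collapsed table that composes the two:

```python
# Toy illustration of composing two page-table translations into one.
# Addresses and table contents are invented for the example.

guest_table = {0x1000: 0x4000, 0x2000: 0x5000}  # L2 virtual -> L1 physical
host_table = {0x4000: 0x9000, 0x5000: 0xA000}   # L1 physical -> L0 physical

# L0 collapses the two dimensions into one table (in practice this could
# be filled lazily, on page faults).
collapsed = {va: host_table[pa] for va, pa in guest_table.items()}

print(hex(collapsed[0x1000]))  # one lookup instead of two
```

So most memory accesses pay for one table walk instead of a chain of them, which is where the performance win comes from.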
&lt;br /&gt;
==Implementation==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Performance==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
&#039;&#039;&#039;The good:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* From what I&#039;ve read so far, the research presented in the paper is probably the first to achieve efficient x86 nested virtualization without altering the hardware, relying on software-only techniques and mechanisms. The paper also won the Jay Lepreau best paper award.&lt;br /&gt;
&lt;br /&gt;
* security - being able to run other hypervisors without being detected&lt;br /&gt;
&lt;br /&gt;
* testing, debugging - of hypervisors&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The bad:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Lots of exits. To be continued. (Anyone who is interested, feel free to take this topic.)&lt;br /&gt;
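To see why the exits blow up in the single-level model, here is a back-of-the-envelope sketch (the numbers are completely made up, just to show the multiplication effect): one exit by L2 is handled by L1, but every privileged instruction L1 executes in its handler itself traps to L0.

```python
# Rough illustration of exit multiplication under single-level
# trap-and-emulate. The per-handler count is a hypothetical figure.

def total_exits(l2_exits, privileged_ops_per_handler):
    """Each L2 exit costs one trap to L0 plus one additional L0-handled
    trap for every privileged instruction in L1's exit handler."""
    return l2_exits * (1 + privileged_ops_per_handler)

# If L1's handler executed, say, 40 privileged instructions (made up),
# a single L2 exit would balloon into 41 exits handled by L0.
print(total_exits(1, 40))
print(total_exits(100, 40))
```

This is presumably why the paper's micro-optimizations focus on making individual exits cheaper and merging them where possible.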
&lt;br /&gt;
==References==&lt;br /&gt;
[1] http://www.haifux.org/lectures/225/ - &#039;&#039;&#039;Nested x86 Virtualization - Muli Ben-Yehuda&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5332</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5332"/>
		<updated>2010-11-22T14:42:01Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
Hey dudes. I think we need to get going here.. the paper is due in 4 days. I just did the paper intro section (provided the title, authors, research labs, links, etc.). I have read the paper twice so far and will be spending the whole day working on the background concepts and the research problem sections. &lt;br /&gt;
&lt;br /&gt;
I&#039;m still not sure on how we should divide the work and sections among the members, especially regarding the research contribution and critique, I mean those sections should not be based or written from the perspective of one person, we all need to work and discuss those paper concepts together.&lt;br /&gt;
&lt;br /&gt;
If anyone wants to add something, then please add but don&#039;t edit or alter the already existing content. Lets try to get as many thoughts/ideas as possible and then we will edit and filter the redundancy later. And lets make sure that we add summary comments to our edits to make it easier to keep track of everything.&lt;br /&gt;
&lt;br /&gt;
Also, we&#039;re still missing one member: Shawn Hansen. Its weird because on last Wednesday&#039;s lab, the prof told me that he attended the lab and signed his name, so he should still be in the course. --[[User:Hesperus|Hesperus]] 18:07, 21 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah man. We really do need to get on this. Not going to ozzy so I got free time now. I am reading it again to refresh my memory of it and will put notes of what I think we can criticize about it and such. What kind of references do you think we will need?  Similar papers etc?&lt;br /&gt;
If you need to a hold of me. Best way is through email. jslonosk@connect.Carleton.ca.  And if that is still in our group but doesn&#039;t participate,  too bad for him--JSlonosky 14:42, 22 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5061</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5061"/>
		<updated>2010-11-16T16:37:53Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Why waste your money on that old man ? I&#039;d love to see Halford though, I&#039;m sure he&#039;ll do some classic Priest material, haven&#039;t checked the new record yet, but the cover looks awful, definitely the worst and most ridiculous cover of the year. Anyways, enough music talk. I think we should get it done at least on 24th, we should leave the last day to do the editing and stuff. I removed Smcilroy from the members list, I think he checked in here by mistake because I can see him in group 7. So far, we&#039;re 5, still missing one member. --[[User:Hesperus|Hesperus]] 05:36, 16 November 2010 (UTC)&lt;br /&gt;
-----&lt;br /&gt;
Yeah that would be pretty sweet.  I figured I might as well see him when I can; Since he is going to be dead soon.  How is he not already?  Alright well, the other member should show up soon, or I&#039;d guess that we are a group of 5. --JSlonosky 16:37, 16 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5033</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5033"/>
		<updated>2010-11-16T04:09:01Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Smcilroy&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days. --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your names. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something ? We haven&#039;t confirmed or emailed the prof yet, I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic),&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name or even email me, you can find my contact info in my profile page(just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable, in fact we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMWare and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
approaches like nested-virtualization, hypervisors, supervisors, etc, things that we even covered in class and we can in fact test on our machines. I&#039;ve already started reading the article, hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session in the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Yeah, it looks  pretty good to me.  Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not letting it fall on you guys. Not that I would let that happen --JSlonosky 02:51, 16 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5032</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=5032"/>
		<updated>2010-11-16T02:51:10Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* General discussion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Group members=&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
* Michael Bingham&lt;br /&gt;
* Smcilroy&lt;br /&gt;
* Chris Sullivan&lt;br /&gt;
* Pawel Raubic&lt;br /&gt;
&lt;br /&gt;
=General discussion=&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Pawel has already contacted us, so he&#039;s still in for the course; that makes 3 of us. The other three members, please drop in and add your name. We need to confirm the members today by 1:00 pm. --[[User:Hesperus|Hesperus]] 12:18, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
----------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Mbingham|Mbingham]] 15:08, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
&lt;br /&gt;
Checked in --[[User:Smcilroy|Smcilroy]] 17:03, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
To the person above me (Smcilroy): I can see that you&#039;re assigned to group 7 and not this one. So did the prof move you to this group or something? We haven&#039;t confirmed or emailed the prof yet; I will wait until 1:00 pm. --[[User:Hesperus|Hesperus]] 17:22, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
---------------------&lt;br /&gt;
Alright, so I just emailed the prof the list of members that have checked in so far (the names listed above plus Pawel Raubic).&lt;br /&gt;
Smcilroy: I still don&#039;t know whether you&#039;re in this group or not, though I don&#039;t see your name listed in the group assignments on the course webpage. To the other members: if you&#039;re still interested in doing the course, please drop in here and add your name, or even email me; you can find my contact info on my profile page (just click my signature).&lt;br /&gt;
&lt;br /&gt;
Personally speaking, I find the topic of this article (The Turtle Project) to be quite interesting and approachable. In fact, we&#039;ve&lt;br /&gt;
already been playing with VirtualBox and VMware and such things, so we should be familiar with some of the concepts the article&lt;br /&gt;
covers, like nested virtualization, hypervisors, supervisors, etc. - things we even covered in class and can in fact test on our machines. I&#039;ve already started reading the article; hopefully tonight we&#039;ll start posting some basic ideas or concepts and talk about the article in general. I will be in tomorrow&#039;s tutorial session on the 4th floor in case some of you guys want to get to know one another. --[[User:Hesperus|Hesperus]] 18:43, 15 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Yeah, it looks pretty good to me. Unfortunately, I am attending Ozzy Osbourne on the 25th, so I&#039;d like it if we could get ourselves organized early so I can get my part done and not let it fall on you guys. --JSlonosky 02:51, 16 November 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=4959</id>
		<title>Talk:COMP 3000 Essay 2 2010 Question 9</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_2_2010_Question_9&amp;diff=4959"/>
		<updated>2010-11-15T01:52:00Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Group members:&lt;br /&gt;
&lt;br /&gt;
* Munther Hussain&lt;br /&gt;
* Jonathon Slonosky&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
---------------&lt;br /&gt;
&lt;br /&gt;
Hey there, this is Munther. The prof said that we should be contacting each other to see who&#039;s still on board for the course. So please,&lt;br /&gt;
if you read this, add your name to the list of members above. You can find my contact info on my profile page by clicking my signature. We shall talk about the details and how we will approach this in the next few days --[[User:Hesperus|Hesperus]] 16:41, 12 November 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Checked in -- JSlonosky&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4368</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4368"/>
		<updated>2010-10-15T02:50:10Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security; corruption in one component does not necessarily cause failure in the system&lt;br /&gt;
* There is a large amount of moving from a process to the kernel to user space and back again; this is a costly operation&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem can rely on this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates physical pages to virtual pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at the kernel level [7]&lt;br /&gt;
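The Grant/Map/Flush operations above can be sketched as a toy model. This is plain Python with address spaces as dicts; the class and method names are illustrative, not the real L4 API:&lt;br /&gt;

```python
# Toy model of the three page operations described above (Grant, Map,
# Flush). An "address space" is just a dict from virtual page number
# to physical frame number; names here are invented for illustration.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = {}          # virtual page number -> physical frame
        self.mapped_to = {}      # page -> list of recipient spaces

    def grant(self, page, recipient):
        """Give a page away: it leaves this space and enters the recipient's."""
        frame = self.pages.pop(page)   # page must be available to the owner
        recipient.pages[page] = frame

    def map(self, page, recipient):
        """Share a page: the recipient gets it, the owner keeps it."""
        recipient.pages[page] = self.pages[page]
        self.mapped_to.setdefault(page, []).append(recipient)

    def flush(self, page):
        """Remove the page from every recipient it was mapped into."""
        for space in self.mapped_to.pop(page, []):
            space.pages.pop(page, None)
```

A user-level pager could be built on exactly these three calls, which is why Map and Flush suffice for memory managers in the notes above.&lt;br /&gt;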
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystem processes to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among operating systems running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
* Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
* Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&#039;&#039;&#039;Classical Virtualization&#039;&#039;&#039;&lt;br /&gt;
(Not complete but most of article 9)&lt;br /&gt;
* VMMs allow programs in virtual environments to run essentially natively, apart from resource usage&lt;br /&gt;
** the dominant share of instructions executes directly on the CPU&lt;br /&gt;
** the VMM completely controls system resources&lt;br /&gt;
** emulating every native instruction would severely affect performance&lt;br /&gt;
** sensitive instructions are those that violate safety and encapsulation&lt;br /&gt;
** the VMM handles them as privileged instructions&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;x86 Virtualization&#039;&#039;&#039;&lt;br /&gt;
* virtualization in personal workstations rather than mainframes&lt;br /&gt;
** rings allow isolation between virtual machines&lt;br /&gt;
** most privileged in ring 0 and least in ring 3; the operating system normally runs in ring 0 and user apps in ring 3&lt;br /&gt;
*** the VMM runs in ring 0 and VMs in lesser-privileged rings (1 or 3)&lt;br /&gt;
*** the guestOS believes it is in ring 0&lt;br /&gt;
* address space compression: where to run the VMM&lt;br /&gt;
** if run inside the guest address space, the guest can find out it is virtualized or compromise the isolation&lt;br /&gt;
* x86 does not trap all sensitive instructions, which violates the classical virtualization requirements, but they can still be handled&lt;br /&gt;
* some privileged accesses fail without faulting&lt;br /&gt;
* interrupt virtualization - both the VMM and the guestOS handle interrupts&lt;br /&gt;
* binary translation - improves performance&lt;br /&gt;
* rewriting instructions and trapping before problems arise&lt;br /&gt;
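The trap-and-handle scheme above can be illustrated with a toy dispatch loop: unprivileged guest instructions "run directly", while sensitive ones trap to the VMM, which emulates them against virtual CPU state. The instruction names and vcpu fields are made up for illustration:&lt;br /&gt;

```python
# Toy trap-and-emulate loop: sensitive instructions never touch real
# hardware state; the VMM updates the guest's *virtual* state instead.

SENSITIVE = {'cli', 'sti', 'write_cr3'}   # invented example set

def vmm_emulate(op, arg, vcpu):
    # The VMM emulates the trapped instruction against virtual state,
    # so the guest never clears the real interrupt flag or page-table base.
    if op == 'cli':
        vcpu['if'] = 0
    elif op == 'sti':
        vcpu['if'] = 1
    elif op == 'write_cr3':
        vcpu['cr3'] = arg

def run_guest(instructions, vcpu):
    for op, arg in instructions:
        if op in SENSITIVE:
            vmm_emulate(op, arg, vcpu)   # privileged: trap to the VMM
        else:
            vcpu['acc'] = arg            # stand-in for direct execution
```

Binary translation, as mentioned above, amounts to rewriting the sensitive cases ahead of time instead of trapping on each one.&lt;br /&gt;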
&lt;br /&gt;
&#039;&#039;&#039;Paravirtualization&#039;&#039;&#039;&lt;br /&gt;
* the guestOS is exposed to VM information so that the guest is aware it is virtualized and can make decisions based on this&lt;br /&gt;
* allows the guest to avoid problem instructions&lt;br /&gt;
* Xen&lt;br /&gt;
* the guestOS must be modified and is not running natively&lt;br /&gt;
** works with the hypervisor to run efficiently&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;VMM types&#039;&#039;&#039;&lt;br /&gt;
* hosted VMM - executes in the hostOS and uses the drivers and support of that OS&lt;br /&gt;
* stand-alone VMM - runs directly on hardware and uses its own drivers and services&lt;br /&gt;
* hybrid VMM - runs a serviceOS through which requests to hardware (I/O) go&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Device Emulation&#039;&#039;&#039;&lt;br /&gt;
* implements real hardware in software&lt;br /&gt;
* a completely virtual device that the guest interacts with&lt;br /&gt;
* mapped to physical hardware that handles the interactions; the emulation layer does the conversion&lt;br /&gt;
* allows the VM to be easily migrated between machines, as it does not rely on the physical hardware&lt;br /&gt;
* allows having multiple VMs and simplifies sharing (multiplexing)&lt;br /&gt;
* poor performance, as the VMM needs to do a lot of work to virtualize the machine&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Paravirtualization&#039;&#039;&#039;&lt;br /&gt;
* the modified guestOS cooperates with the VMM&lt;br /&gt;
* the VMM does not have to do everything to handle device drivers&lt;br /&gt;
* not everything can be paravirtualized&lt;br /&gt;
* proprietary OSes and device drivers can&#039;t be paravirtualized&lt;br /&gt;
* still allows an increase in performance&lt;br /&gt;
* eventing or callback mechanism&lt;br /&gt;
** the guestOS modifies its interrupt mechanisms&lt;br /&gt;
* the modifications are not applicable to all guestOSes&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dedicated Devices&#039;&#039;&#039;&lt;br /&gt;
* does not virtualize the device but assigns it directly to a guest VM&lt;br /&gt;
* uses the guest&#039;s drivers instead of the host&#039;s&lt;br /&gt;
* simplifies the VMM by removing the need to handle I/O securely&lt;br /&gt;
* limited number of physical devices that can be dedicated&lt;br /&gt;
* difficult to migrate the VM, as it depends on the pairing with this resource&lt;br /&gt;
* eliminates virtualization overhead and keeps the VMM simple&lt;br /&gt;
* direct memory access is not supported&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource rather than an abstraction of it&lt;br /&gt;
* less functionality provided by the kernel: just security and handling of resource sharing&lt;br /&gt;
* once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other; to do this the exokernel separates protection from management, which involves three important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it need only be trusted by the application; the example given is a bad parameter passed to the LibOS affecting only the application [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
* Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
* Expose Allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
* Expose Names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are, I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
* Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* the LibOS handles resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned with policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement secure bindings:&lt;br /&gt;
** hardware mechanisms [1]&lt;br /&gt;
** software caching [1]&lt;br /&gt;
** downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can run without the application being scheduled [2]&lt;br /&gt;
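The bind-time/access-time split above can be sketched in a few lines: the kernel runs its (potentially complex) authorization check once, at bind time, and later accesses are just a cheap table lookup. The ACL/token scheme here is invented for illustration, not the exokernel papers&#039; actual mechanism:&lt;br /&gt;

```python
# Sketch of a secure binding: authorize once at bind time, then check
# cheaply at access time without understanding the resource or policy.
import secrets

class Exokernel:
    def __init__(self, acl):
        self.acl = acl            # resource name -> set of authorized apps
        self.bindings = {}        # opaque token -> resource name

    def bind(self, app, resource):
        # The potentially complex authorization check runs once, here.
        if app not in self.acl.get(resource, set()):
            raise PermissionError('not authorized to bind ' + resource)
        token = secrets.token_hex(8)
        self.bindings[token] = resource
        return token

    def access(self, token):
        # Access-time check is just a lookup; the kernel does not need
        # to re-evaluate the policy or understand the resource here.
        return self.bindings[token]   # KeyError if never bound
```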
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource outright, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
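The visible-revocation/abort interplay above can be sketched as follows; the class and method names are invented for illustration, not from the exokernel papers:&lt;br /&gt;

```python
# Toy visible revocation with an abort fallback: the exokernel asks the
# LibOS to give a resource back, and only forcibly reclaims it (abort
# protocol) if the LibOS ignores the request.

class LibOS:
    def __init__(self, cooperative):
        self.cooperative = cooperative
        self.held = set()

    def please_release(self, resource):
        # Visible revocation: a well-behaved LibOS chooses for itself
        # which instance to give up and releases it.
        if self.cooperative:
            self.held.discard(resource)
            return True
        return False   # misbehaving LibOS ignores the request

def revoke(free_pool, libos, resource):
    if not libos.please_release(resource):
        # Abort protocol: the LibOS failed to respond, so the
        # exokernel forcibly takes the resource away.
        libos.held.discard(resource)
    free_pool.add(resource)
```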
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite/reference it better later&lt;br /&gt;
&lt;br /&gt;
[9]&amp;lt;nowiki&amp;gt;Fisher-Ogden, J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Not completely sure of the citation style used above.&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, but as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels, in the sense that exokernels can give developers low-level access, similar to direct access, through a protected layer, and at the same time contain enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of a machine or of devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers)&lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, it can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel, but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
Btw, on my page (I guess you can call it that) I have some resources I have found --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy. That sounds good. There should be 5 or 6 of us though... Oh well. Their loss. I will do some before or after work today. I&#039;ll start with Microkernel, since there is not a large amount of info here and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. Btw, if anyone has any more information, feel free to add it. It would be nice if you add the references, so that citing is really easy; on acm.org it will automatically give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today, and for VMs he said we should focus on implementations such as Xen and VMware; he also said to talk about paravirtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  CLing, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
Aaron .L&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds.) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In Computer Science, the kernel is the component at the center of the majority of operating systems: it is the bridge through which applications access the hardware. It is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, the kernel is even smaller than a microkernel: it is deliberately tiny, striving to keep its functionality limited to the protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and because the hardware is accessed indirectly, it is less efficient than a real machine. The exokernel, by contrast, provides low-level hardware access and custom abstractions over those devices, improving program performance compared with a VM&#039;s implementation. The exokernel design takes the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; an operating system could not function without it.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and it has the most privileges within the system. It runs alongside ‘user space’, which is where the user has access and can run applications and libraries.[8] This leaves the kernel to manage the other necessary processes; for example, the kernel may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process at its lowest level.[8] A monolithic kernel, which contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed to be updated with more code, or the system changed, the entire kernel had to be recompiled, and because of the number of processes within it, this took an impractical amount of time. Here, a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is kept in the kernel only if removing it would impact the system. Implementing a microkernel can affect the system in a variety of ways; for example, it can increase performance and efficiency. [7] In short, a microkernel is a kernel that keeps a reduced amount of mandatory software within itself, which means it contains less software to manage and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design emerged from the end of the 1980’s to the early 1990’s. In this structure, the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular, and makes it easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed and, while the device waits for the system to recognize it, the operating system replaces the driver. This allows real-time updating that can be done while the computer is still functional, which can prevent a complete crash of the system. Therefore, if a device fails, the kernel will not crash with it, as a monolithic kernel would. The microkernel can reload the driver of the failed device and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. - Key note on exokernel&#039;s multiplexing vs. microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We also need to start laying out weaknesses in the design in order to play up the idea that an exokernel just does it better. -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualizing the physical machine&#039;s resources in order to share them among the operating systems run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine: it handles any requests for resources and reconciles those requests with what the hostOS has actually provided to the VM. The hostOS provides management for the VMM, as well as physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS. It can only access resources that the hostOS has made available to the VM. [6] The guestOS does not know about any other resources and does not have direct access to physical hardware; the VMM takes care of that, while the guestOS executes as its own machine, unaware of this mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run, including device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it is not dependent on the physical devices but rather on the software emulations. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization boosts performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS; it must be modified so that it is aware it is a virtualized system. [9] Since the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced because it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are several disadvantages. Paravirtualization can only be used if the modifications to the guestOS can be implemented; as well, not everything can be paravirtualized, which limits the cases in which this method can be used. [9] Also, every guestOS must be modified in order to be used in paravirtualization, and the modifications differ between operating systems, so there is also the task of implementing these changes to make each guestOS compatible. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS’s drivers instead of the hostOS’s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are only limited physical resources to dedicate to a guestOS, and this also makes migration difficult, as the guestOS is dependent on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;notes&#039;&#039;&#039;&lt;br /&gt;
- it ended up being quite lengthy. I mainly focused on the device virtualization rather than the architecture of a VM (like x86 virtualization). I&#039;ll put up my notes for the paper I found for virtualization. I didn&#039;t talk about Xen or VMware though. If any of that is needed, I can try to continue working on it tonight but I have another priority.&lt;br /&gt;
&lt;br /&gt;
- Try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs. the direct hardware access or custom abstractions that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. They can also be seen as simply dividing a monolithic kernel into two parts: the management tasks of the kernel, namely raw resource management tasks such as memory management, remain in the exokernel, while the higher-level abstractions such as file systems, address spaces, and interprocess communication are handled at the application level[1]. These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can produce large performance boosts in several areas, as shown below. &lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and binding points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions, the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented, while also allowing application-based resource management, which is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where inter-library protection is required to maintain system integrity.[1] By exposing the allocation of raw resources and their physical names to the application layer, the exokernel allows the library to request physical resources directly, which removes the expensive overhead involved in translating virtual names to physical names [3]; the physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and to choose which instance of a resource to release[1].&lt;br /&gt;
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel[1]. Since the library OS is not trusted by the exokernel, it can in turn be trusted by the application. While a library OS may choose to handle low-level management tasks itself, there is still a notion of portability for applications working with library OSs: applications that use a library OS implementing standard interfaces, such as POSIX, will be portable to any system with the same interface [1], and a library OS itself can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different methods of handling physical resources, similar to virtual machines.&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]   -- We may not need this.  Corey did a good job with the Exokernel and incorporated the information and its compromise of the two systems&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each others parts and put suggestions and edits. One of us should try and change it to one style if there are contradictions. And to put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
I made some edits to the first two paragraphs. I just reworded some of the unclear sentences and some grammatical errors. I&#039;ll work on editing more of it after comp 3007. Also when all the parts are up i can go through it and link the paragraphs together so it can be read more like an essay  --[[User:Aellebla|Aellebla]] 15:18, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
So far so good, if you find some sentences that are off, go ahead and correct them, just note to us in here that you&#039;ve made changes. Almost done guys! -Slade&lt;br /&gt;
&lt;br /&gt;
Awesome Steph!  Also, Awesome Corey, sounds sweet, looks good-JSlonosky&lt;br /&gt;
&lt;br /&gt;
Made a few more edits to some of the wording. Looks good! --[[User:Aellebla|Aellebla]] 01:52, 15 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- I wrote the conclusion and moved it into the answer space.  Please take a look and make sure it is all formatted properly.  Got stuff to do.  Thanks for all the awesome work, guys/girls.  Back to 3004! - JSlonosky&lt;br /&gt;
&lt;br /&gt;
==Potential Test Questions==&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4364</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4364"/>
		<updated>2010-10-15T02:48:19Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Computer Science, the kernel is the component at the center of the majority of operating systems: it is the bridge through which applications access the hardware. It is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, the kernel is even smaller than a microkernel: it is deliberately tiny, striving to keep its functionality limited to the protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and because the hardware is accessed indirectly, it is less efficient than a real machine. The exokernel, by contrast, provides low-level hardware access and custom abstractions over those devices, improving program performance compared with a VM&#039;s implementation. The exokernel design takes the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
==Microkernels==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; an operating system could not function without it.&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and it has the most privileges within the system. It runs alongside ‘user space’, which is where the user has access and can run applications and libraries.[8] This leaves the kernel to manage the other necessary processes; for example, the kernel may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process at its lowest level.[8] A monolithic kernel, which contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed to be updated with more code, or the system changed, the entire kernel had to be recompiled, and because of the number of processes within it, this took an impractical amount of time. Here, a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is kept in the kernel only if removing it would impact the system. Implementing a microkernel can affect the system in a variety of ways; for example, it can increase performance and efficiency. [7] In short, a microkernel is a kernel that keeps a reduced amount of mandatory software within itself, which means it contains less software to manage and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design emerged from the end of the 1980’s to the early 1990’s. In this structure, the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular, and makes it easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed and, while the device waits for the system to recognize it, the operating system replaces the driver. This allows real-time updating that can be done while the computer is still functional, which can prevent a complete crash of the system. Therefore, if a device fails, the kernel will not crash with it, as a monolithic kernel would. The microkernel can reload the driver of the failed device and continue functioning. [7]&lt;br /&gt;
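The fault-isolation argument above can be illustrated with a toy model (a sketch only: the class and method names are invented for illustration, and real microkernel systems implement driver restart with far more machinery, e.g. a dedicated supervisor process):

```python
# Toy model of microkernel-style driver isolation: drivers run outside
# the kernel, so a driver fault is caught and the driver is reloaded
# instead of crashing the whole system. All names are illustrative.

class DriverCrash(Exception):
    pass

class Driver:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle_request(self, req):
        if not self.healthy:
            raise DriverCrash(self.name)
        return f"{self.name} handled {req}"

class Microkernel:
    """Keeps only process control and IPC; drivers are restartable servers."""
    def __init__(self):
        self.drivers = {}

    def load_driver(self, name):
        self.drivers[name] = Driver(name)

    def request(self, name, req):
        try:
            return self.drivers[name].handle_request(req)
        except DriverCrash:
            # Fault isolated: reload the driver and retry; the system stays up.
            self.load_driver(name)
            return self.drivers[name].handle_request(req)

kernel = Microkernel()
kernel.load_driver("disk")
kernel.drivers["disk"].healthy = False         # simulate a driver fault
print(kernel.request("disk", "read block 7"))  # → "disk handled read block 7"
```

In a monolithic kernel the equivalent fault would occur inside kernel space, where there is no `except` boundary to recover behind; that is the contrast the paragraph above draws.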
&lt;br /&gt;
==Virtual Machines==&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualizing the physical machine&#039;s resources in order to share them among the operating systems run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine: it handles any requests for resources and reconciles those requests with what the hostOS has actually provided to the VM. The hostOS provides management for the VMM, as well as physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS. It can only access resources that the hostOS has made available to the VM. [6] The guestOS does not know about any other resources and does not have direct access to physical hardware; the VMM takes care of that, while the guestOS executes as its own machine, unaware of this mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run, including device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it is not dependent on the physical devices but rather on the software emulations. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization boosts performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS; it must be modified so that it is aware it is a virtualized system. [9] Since the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced because it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are several disadvantages. Paravirtualization can only be used if the modifications to the guestOS can be implemented; as well, not everything can be paravirtualized, which limits the cases in which this method can be used. [9] Also, every guestOS must be modified in order to be used in paravirtualization, and the modifications differ between operating systems, so there is also the task of implementing these changes to make each guestOS compatible. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS’s drivers instead of the hostOS’s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are only limited physical resources to dedicate to a guestOS, and this also makes migration difficult, as the guestOS is dependent on the physical device. [9]&lt;br /&gt;
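The device-emulation path described above, where every guest request traps into the VMM for translation before reaching the hardware, can be sketched as a toy dispatcher (all class names and the guest "instruction" format are invented for illustration):

```python
# Toy sketch of device emulation: every guestOS request goes through the
# VMM, which translates it into an operation the physical device accepts.
# This per-request translation is the main source of emulation's overhead.

class PhysicalDisk:
    """Stands in for a real device the hostOS controls."""
    def __init__(self):
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block, b"")

class VMM:
    """Maps the emulated disk onto the physical one, translating requests."""
    def __init__(self, disk):
        self.disk = disk

    def emulated_io(self, guest_request):
        op, block, data = guest_request   # the guest's virtual disk command
        if op == "WRITE":                 # translate to a physical device op
            self.disk.write(block, data)
        elif op == "READ":
            return self.disk.read(block)

disk = PhysicalDisk()
vmm = VMM(disk)
vmm.emulated_io(("WRITE", 3, b"hello"))
print(vmm.emulated_io(("READ", 3, None)))   # guest reads back its own data
```

A dedicated device, by contrast, would hand the guest the `PhysicalDisk` object directly, skipping the `VMM.emulated_io` hop entirely; that is the performance/migratability trade-off the section describes.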
&lt;br /&gt;
&lt;br /&gt;
==Exokernels==&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. They can also be seen as simply dividing a monolithic kernel into two parts: the management tasks of the kernel, namely raw resource management tasks such as memory management, remain in the exokernel, while the higher-level abstractions such as file systems, address spaces, and interprocess communication are handled at the application level[1]. These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can produce large performance boosts in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and binding points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions, the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented, while also allowing application-based resource management, which is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where inter-library protection is required to maintain system integrity.[1] By exposing the allocation of raw resources and their physical names to the application layer, the exokernel allows the library to request physical resources directly, which removes the expensive overhead involved in translating virtual names to physical names [3]; the physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and to choose which instance of a resource to release[1].&lt;br /&gt;
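The three functions (tracking ownership, guarding access, and revoking resources) can be sketched as a toy resource table; this is only an illustration of the division of labour, not the exokernel paper's actual interface, and the names `bind`, `guard` and `revoke` are invented here:

```python
# Toy sketch of an exokernel's three resource functions: track ownership,
# guard access (protection), and revoke. Library OSs receive raw resources
# (physical page numbers here) and build their own abstractions on top.

class Exokernel:
    def __init__(self, num_pages):
        self.owner = {}                    # physical page -> owning library OS
        self.free = list(range(num_pages)) # pages not yet allocated

    def bind(self, lib_os):
        """Allocate a physical page and record its owner."""
        page = self.free.pop(0)
        self.owner[page] = lib_os
        return page        # the library OS sees the *physical* name directly

    def guard(self, lib_os, page):
        """Protection: only the recorded owner may touch the page."""
        return self.owner.get(page) == lib_os

    def revoke(self, page):
        """Take the resource back; management stays with the library OS."""
        self.owner.pop(page, None)
        self.free.append(page)

exo = Exokernel(num_pages=4)
p = exo.bind("libOS-A")
print(exo.guard("libOS-A", p))   # True: the owner's access is allowed
print(exo.guard("libOS-B", p))   # False: protected from other library OSs
exo.revoke(p)
print(exo.guard("libOS-A", p))   # False: access has been revoked
```

Note what is absent: the kernel has no idea what the page is used for (file cache, heap, page table); deciding that is entirely the library OS's job, which is exactly the split the paragraph above describes.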
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several advantages compared to running on a microkernel or a VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel [1]; moreover, since the library OS is not trusted by the exokernel, it can instead be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications remain portable: an application that uses a library OS implementing a standard interface, such as POSIX, will run on any system offering the same interface [1], and a library OS itself can be made portable if it is designed against a low-level, machine-independent layer that hides hardware details [1].&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs they also provide a simple yet effective way to emulate several different methods of handling physical resources, much as virtual machines do.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of the operating system. It manages processes and interrupts, and controls memory and drivers through process management. The microkernel is an improved design over the original monolithic kernel: it moves unnecessary processes out of kernel space and into user space, making the kernel smaller, more portable, and easier to manage, and reducing crashes caused by drivers and other problems. Virtual machines run software that emulates hardware for an operating system to run on, allowing one operating system to run within another while sharing the same physical hardware. Microkernels and virtual machines each have their faults, however. An exokernel can be seen as a compromise between the two: it contains a condensed kernel similar to a microkernel, yet has better control over its libraries than a virtual machine has, and it can talk to the hardware directly without the communication difficulty a microkernel has. As computers continue to evolve, further implementations of exokernels would be a welcome addition.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt; Roch, Benjamin 2004. Microkernel Versus Monolithic Kernel. Vienna University of Technology, Wien, Österreich.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[9]&amp;lt;nowiki&amp;gt;Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego.&lt;br /&gt;
&lt;br /&gt;
http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* Everything has been moved to the discussion page.&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4363</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4363"/>
		<updated>2010-10-15T02:48:09Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
In computer science, the kernel is the component at the center of most operating systems. The kernel is a bridge that gives applications access to the hardware; it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design handles this management. In the exokernel model, the kernel is much smaller than even a microkernel, because its functionality is deliberately limited to the protection and multiplexing of resources. The virtual machine approach of virtualizing every device on the system provides compatibility, but it also adds a layer of complexity and is less efficient than a real machine, since all hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and lets applications build custom abstractions over those devices in order to improve program performance. The exokernel design thus takes the better ideas from microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
==Microkernels==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.&lt;br /&gt;
A kernel is the lowest-level section of an operating system and has the most privileges within a system. It runs alongside ‘user space’, where the user has access and can run applications and libraries. [8] This leaves the kernel to manage the remaining necessary processes; for example, the kernel might manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process on its lowest level. [8] A monolithic kernel, which contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed to be updated with more code, or the system changed, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficiently long time. Here a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is kept in the kernel only if moving it outside would compromise the system&#039;s required behavior. [7] Implementing a microkernel can affect the system in several ways, for example by increasing performance and efficiency. [7] In short, a microkernel is a kernel with a reduced amount of mandatory software within itself, which means it contains less software to manage and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design that emerged from the late 1980’s to the early 1990’s removes the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts. [8] This structure makes the system much more modular and makes it easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled: [7] the old driver can be removed and, while the device waits for the system to recognize it again, the operating system replaces the driver. This allows real-time updating while the computer is still functional, and can prevent a complete crash of the system. If a device fails, the kernel will not crash with it, as a monolithic kernel would; the microkernel can reload the failed device&#039;s driver and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
==Virtual Machines==&lt;br /&gt;
&lt;br /&gt;
A virtual machine, or VM, is a software abstraction of a physical machine. This entails virtualizing the physical machine&#039;s resources in order to share them among the OSs run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VMs running on top of it. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows a VM to operate as if it were on its own machine: it handles all requests for resources and reconciles them with what the hostOS has actually provided to the VM. The hostOS manages the VMM and provides physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS being run through virtualization. [6] This OS is called the guestOS. It can only access resources that the hostOS has made available to the VM; [6] it does not know about any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guestOS executes as if on its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run; these include device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides, in software, a complete virtualization of a device for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions compatible with the device. [9] Device emulation allows a VM to be migrated easily to another machine, as it depends not on the physical devices but on the software emulations instead. [9] It also allows simpler multiplexing between multiple virtual machines, since sharing can be handled through the virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization provides a performance boost by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not an unmodified native OS: it must be modified so that it is aware it is running in a virtualized system. [9] Because the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced, as it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are several disadvantages. Paravirtualization can only be used if the necessary modifications to the guestOS can be implemented, and not everything can be paravirtualized, which limits the cases in which this method can be used. [9] Furthermore, every guestOS must be modified before it can be used with paravirtualization, and since the modifications differ between OSs, making each guestOS compatible is a task in itself. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS’s drivers instead of the hostOS’s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, the physical resources that can be dedicated to a guestOS are limited, and migration becomes difficult because the guestOS is dependent on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Exokernels==&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. They can also be seen as simply dividing a monolithic kernel into two parts: the raw resource management tasks of the kernel, such as memory management, remain in the exokernel, while higher-level abstractions such as file systems, address spaces, and interprocess communication are implemented at the application level [1]. These abstractions are usually provided by library OSs, which allow applications to manage their own machine resources in ways not possible with a traditional kernel. This, in turn, can yield large performance gains in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks the fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and binding points, and revokes access to resources [1]. In doing so, the exokernel gives the library OSs maximum freedom over the machine&#039;s resources without letting them interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
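These three functions can be sketched in a few lines of Python; the names page_alloc, page_access, and page_revoke are invented for illustration and do not come from any real exokernel interface:&lt;br /&gt;

```python
# Illustrative sketch only: an ownership table over physical pages,
# with the exokernel limited to tracking, guarding, and revoking.

FREE = None

class Exokernel:
    def __init__(self, npages):
        # ownership table: which library OS owns each physical page
        self.owner = [FREE] * npages

    def page_alloc(self, libos):
        # track ownership: hand a free physical page to a library OS
        for page, who in enumerate(self.owner):
            if who is FREE:
                self.owner[page] = libos
                return page
        return -1   # no free pages

    def page_access(self, libos, page):
        # guard usage: every access is checked against the table
        return self.owner[page] == libos

    def page_revoke(self, page):
        # revoke: reclaim the page so it can be reallocated
        self.owner[page] = FREE

ek = Exokernel(4)
p = ek.page_alloc("libos1")                # library OS 1 gets a page
assert ek.page_access("libos1", p)         # the owner may use it
assert not ek.page_access("libos2", p)     # another library OS may not
ek.page_revoke(p)
assert not ek.page_access("libos1", p)     # access ends with revocation
```

Note that revocation here is forced; in a real exokernel a well-behaved library OS would participate in revocation and choose which instance of a resource to give up [1].&lt;br /&gt;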
&lt;br /&gt;
Through these three functions the exokernel can control and enable many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS, so that traditional OS abstractions can be implemented and applications can manage resources themselves, which the authors argue is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where protection against inter-library conflicts is required to maintain system integrity [1]. By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel allows a library to request physical resources directly, removing the expensive overhead of translating virtual names into physical names; physical names also capture more useful information and are safer and less resource-intensive [3]. Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management and lets each library OS choose which instance of a resource to release [1].&lt;br /&gt;
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several advantages compared to running on a microkernel or a VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel [1]; moreover, since the library OS is not trusted by the exokernel, it can instead be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications remain portable: an application that uses a library OS implementing a standard interface, such as POSIX, will run on any system offering the same interface [1], and a library OS itself can be made portable if it is designed against a low-level, machine-independent layer that hides hardware details [1].&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs they also provide a simple yet effective way to emulate several different methods of handling physical resources, much as virtual machines do.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of the operating system. It manages processes and interrupts, and controls memory and drivers through process management. The microkernel is an improved design over the original monolithic kernel: it moves unnecessary processes out of kernel space and into user space, making the kernel smaller, more portable, and easier to manage, and reducing crashes caused by drivers and other problems. Virtual machines run software that emulates hardware for an operating system to run on, allowing one operating system to run within another while sharing the same physical hardware. Microkernels and virtual machines each have their faults, however. An exokernel can be seen as a compromise between the two: it contains a condensed kernel similar to a microkernel, yet has better control over its libraries than a virtual machine has, and it can talk to the hardware directly without the communication difficulty a microkernel has. As computers continue to evolve, further implementations of exokernels would be a welcome addition.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt; Roch, Benjamin 2004. Microkernel Versus Monolithic Kernel. Vienna University of Technology, Wien, Österreich.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[9]&amp;lt;nowiki&amp;gt;Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego.&lt;br /&gt;
http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* Everything has been moved to the discussion page.&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4362</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4362"/>
		<updated>2010-10-15T02:47:53Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
In computer science, the kernel is the component at the center of most operating systems. The kernel is a bridge that gives applications access to the hardware; it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design handles this management. In the exokernel model, the kernel is much smaller than even a microkernel, because its functionality is deliberately limited to the protection and multiplexing of resources. The virtual machine approach of virtualizing every device on the system provides compatibility, but it also adds a layer of complexity and is less efficient than a real machine, since all hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and lets applications build custom abstractions over those devices in order to improve program performance. The exokernel design thus takes the better ideas from microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
==Microkernels==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.&lt;br /&gt;
A kernel is the lowest-level section of an operating system and has the most privileges within a system. It runs alongside ‘user space’, where the user has access and can run applications and libraries. [8] This leaves the kernel to manage the remaining necessary processes; for example, the kernel might manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process on its lowest level. [8] A monolithic kernel, which contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed to be updated with more code, or the system changed, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficiently long time. Here a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is kept in the kernel only if moving it outside would compromise the system&#039;s required behavior. [7] Implementing a microkernel can affect the system in several ways, for example by increasing performance and efficiency. [7] In short, a microkernel is a kernel with a reduced amount of mandatory software within itself, which means it contains less software to manage and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design that emerged from the late 1980’s to the early 1990’s removes the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts. [8] This structure makes the system much more modular and makes it easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled: [7] the old driver can be removed and, while the device waits for the system to recognize it again, the operating system replaces the driver. This allows real-time updating while the computer is still functional, and can prevent a complete crash of the system. If a device fails, the kernel will not crash with it, as a monolithic kernel would; the microkernel can reload the failed device&#039;s driver and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
==Virtual Machines==&lt;br /&gt;
&lt;br /&gt;
A virtual machine, or VM, is a software abstraction of a physical machine. This entails virtualizing the physical machine&#039;s resources in order to share them among the OSs run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VMs running on top of it. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows a VM to operate as if it were on its own machine: it handles all requests for resources and reconciles them with what the hostOS has actually provided to the VM. The hostOS manages the VMM and provides physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS being run through virtualization. [6] This OS is called the guestOS. It can only access resources that the hostOS has made available to the VM; [6] it does not know about any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guestOS executes as if on its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run; these include device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides, in software, a complete virtualization of a device for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions compatible with the device. [9] Device emulation allows a VM to be migrated easily to another machine, as it depends not on the physical devices but on the software emulations instead. [9] It also allows simpler multiplexing between multiple virtual machines, since sharing can be handled through the virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization provides a performance boost by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not an unmodified native OS: it must be modified so that it is aware it is running in a virtualized system. [9] Because the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced, as it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are several disadvantages. Paravirtualization can only be used if the necessary modifications to the guestOS can be implemented, and not everything can be paravirtualized, which limits the cases in which this method can be used. [9] Furthermore, every guestOS must be modified before it can be used with paravirtualization, and since the modifications differ between OSs, making each guestOS compatible is a task in itself. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS’s drivers instead of the hostOS’s. [9] This allows the guestOS to use the hardware to its full extent without having to go through the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are limited physical resources to dedicate to a guestOS, and this approach also makes migration difficult, as the guestOS becomes dependent on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Exokernels==&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. They can also be seen as simply dividing a monolithic kernel into two parts: the raw resource management tasks of the kernel, such as memory management, remain in the exokernel, while the higher-level abstractions, such as file systems, address spaces, and interprocess communication, are implemented at the application level. [1] These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can yield large performance boosts in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and bind points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions the exokernel can control and enable many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented, and can permit application-based resource management, which is the best way to build flexible and efficient systems, all while avoiding resource management itself except where inter-library protection is required to maintain system integrity. [1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel allows a library to request physical resources directly, which removes the expensive overhead involved in translating virtual names to physical names; [3] the physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and lets the library OS choose which instance of a resource to release. [1]&lt;br /&gt;
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel; [1] also, since the library OS is not trusted by the exokernel, it can instead be trusted by the application. While a library OS may choose to handle low-level management tasks itself, there is still a notion of portability for applications working with library OSs: applications that use a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface, [1] and a library OS itself can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different types of physical resource handling, similar to virtual machines.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of the operating system. It manages processes and interrupts, and controls memory and drivers through process management. The microkernel is an improved design over the original kernel: it removes unnecessary processes from kernel space and moves them into user space, making the kernel smaller, more portable, and easier to manage, and reducing crashes caused by drivers and other problems. Virtual machines run software that emulates hardware for an operating system to run on, allowing an operating system to run within another operating system while sharing the same physical hardware. However, microkernels and virtual machines both have their faults. An exokernel can be seen as a compromise between them: it contains a condensed kernel similar to a microkernel, has better control over libraries than a virtual machine has, and can talk to the hardware directly without the difficulty a microkernel has. Computers are always evolving, and further implementation of exokernels would make a great addition.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt; Roch, Benjamin 2004. Microkernel Verses Monolithic Kernel. Vienna University of Technology, Wien, Österreich.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[9]&amp;lt;nowiki&amp;gt;Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego.&lt;br /&gt;
http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4361</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4361"/>
		<updated>2010-10-15T02:47:25Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware; it is responsible for managing the system&#039;s resources such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, exokernels are much smaller than microkernels: they are tiny by design and strive to limit their functionality to the protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because it accesses the hardware indirectly. In contrast, the exokernel provides low-level hardware access with custom abstractions over those devices, in order to improve program performance compared to a VM&#039;s implementation. The exokernel design takes the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
==Microkernels==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and within a system it has the most privileges. It runs alongside the ‘user space’, which is where the user has access and where the user runs applications and libraries. [8] This leaves the kernel to manage the other necessary processes; for example, the kernel may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process on its lowest level. [8] A monolithic kernel, which contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems: [8] if the kernel needed to be updated with more code, or the system changed, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficient amount of time. Here, a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is included in the kernel only if leaving it out would impact the system. Implementing a microkernel can affect the system in a variety of ways; for example, it can increase performance and efficiency. [7] In short, a microkernel is a kernel with a reduced amount of mandatory software within itself, meaning it contains less software to manage and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel that emerged from the end of the 1980’s to the early 1990’s has a structure in which the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular and makes it easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver. This allows real-time updating while the computer is still functional, which can prevent a complete crash of the system. Therefore, if a device fails, the kernel will not crash itself, as a monolithic kernel would; the microkernel can reload the driver of the failed device and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
==Virtual Machines==&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualizing the physical machine’s resources in order to share them among the operating systems running in VMs. Virtualizing these resources allows each OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine’s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions. The hypervisor is what allows the VM to operate as if it were on its own machine; it does this by handling any requests for resources and reconciling those requests with what the hostOS has actually provided to the VM. The hostOS provides management for the VMM, as well as physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS. It can only access resources that the hostOS has made available to the VM; [6] otherwise, the guestOS does not know about any other resources and has no direct access to physical hardware. The VMM takes care of this, while the guestOS executes as if on its own machine, unaware of this mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run. These include device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete software virtualization of a device for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it depends not on the physical devices but on the software emulations. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization boosts performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS: it must be modified so that it is aware it is running in a virtualized system. [9] Because the guestOS is aware of this, it can make better decisions about how it accesses devices. Since the guestOS handles these decisions itself, the VMM’s responsibility is reduced, as it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, this approach has several disadvantages. Paravirtualization can only be used if the necessary modifications to the guestOS can be implemented, and not everything can be paravirtualized, which limits the cases in which this method applies. [9] Furthermore, every guestOS must be modified before it can be used in paravirtualization, and since the modifications differ between operating systems, making each guestOS compatible is a separate task. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS’s drivers instead of the hostOS’s. [9] This allows the guestOS to use the hardware to its full extent without having to go through the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are limited physical resources to dedicate to a guestOS, and this approach also makes migration difficult, as the guestOS becomes dependent on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Exokernels==&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. They can also be seen as simply dividing a monolithic kernel into two parts: the raw resource management tasks of the kernel, such as memory management, remain in the exokernel, while the higher-level abstractions, such as file systems, address spaces, and interprocess communication, are implemented at the application level. [1] These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can yield large performance boosts in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and bind points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions the exokernel can control and enable many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented, and can permit application-based resource management, which is the best way to build flexible and efficient systems, all while avoiding resource management itself except where inter-library protection is required to maintain system integrity. [1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel allows a library to request physical resources directly, which removes the expensive overhead involved in translating virtual names to physical names; [3] the physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and lets the library OS choose which instance of a resource to release. [1]&lt;br /&gt;
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel; [1] also, since the library OS is not trusted by the exokernel, it can instead be trusted by the application. While a library OS may choose to handle low-level management tasks itself, there is still a notion of portability for applications working with library OSs: applications that use a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface, [1] and a library OS itself can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different types of physical resource handling, similar to virtual machines.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of the operating system. It manages processes and interrupts, and controls memory and drivers through process management. The microkernel is an improved design over the original kernel: it removes unnecessary processes from kernel space and moves them into user space, making the kernel smaller, more portable, and easier to manage, and reducing crashes caused by drivers and other problems. Virtual machines run software that emulates hardware for an operating system to run on, allowing an operating system to run within another operating system while sharing the same physical hardware. However, microkernels and virtual machines both have their faults. An exokernel can be seen as a compromise between them: it contains a condensed kernel similar to a microkernel, has better control over libraries than a virtual machine has, and can talk to the hardware directly without the difficulty a microkernel has. Computers are always evolving, and further implementation of exokernels would make a great addition.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt; Roch, Benjamin 2004. Microkernel Verses Monolithic Kernel. Vienna University of Technology, Wien, Österreich.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[9]&amp;lt;nowiki&amp;gt;Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4360</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4360"/>
		<updated>2010-10-15T02:47:04Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware; it is responsible for managing the system&#039;s resources such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, exokernels are much smaller than microkernels: they are tiny by design and strive to limit their functionality to the protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because it accesses the hardware indirectly. In contrast, the exokernel provides low-level hardware access with custom abstractions over those devices, in order to improve program performance compared to a VM&#039;s implementation. The exokernel design takes the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
=Microkernels=&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and within a system it has the most privileges. It runs alongside the ‘user space’, which is where the user has access and where the user runs applications and libraries. [8] This leaves the kernel to manage the other necessary processes; for example, the kernel may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process on its lowest level. [8] A monolithic kernel, which contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems: [8] if the kernel needed to be updated with more code, or the system changed, the entire kernel would need to be recompiled, and due to the number of processes within it, this would take an inefficient amount of time. Here, a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is included in the kernel only if leaving it out would impact the system. Implementing a microkernel can affect the system in a variety of ways; for example, it can increase performance and efficiency. [7] In short, a microkernel is a kernel with a reduced amount of mandatory software within itself, meaning it contains less software to manage and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel that emerged from the end of the 1980’s to the early 1990’s has a structure in which the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular and makes it easier to provide solutions. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver. This allows real-time updating while the computer is still functional, which can prevent a complete crash of the system. Therefore, if a device fails, the kernel will not crash itself, as a monolithic kernel would; the microkernel can reload the driver of the failed device and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
=Virtual Machines=&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualizing the physical machine’s resources in order to share them among the operating systems running in VMs. Virtualizing these resources allows each OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine’s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine; it does this by handling every request for resources and reconciling those requests with what the hostOS has actually provided to the VM. The hostOS manages the VMM and provides the physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS being run through virtualization. [6] This OS is called the guestOS. It can only access the resources that the hostOS has made available to the VM; [6] it does not know about any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guestOS executes as if it were its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run: device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, since the VM depends on the software emulations rather than on the physical devices. [9] It also allows simpler multiplexing between multiple virtual machines, as sharing can be handled through the virtualized devices. [9] The drawback of emulation is poor performance: the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
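The translation cost described above can be made concrete with a toy sketch. In the Python sketch below, every guest disk access is forced through a ‘VMM’ that remaps it onto the one physical resource; the classes and the sector-offset scheme are purely hypothetical, and a real VMM such as QEMU is vastly more involved.&lt;br /&gt;

```python
# Sketch of device emulation: the VMM sits between the guest and the
# physical device and converts every guest request. Names are illustrative.

class PhysicalDisk:
    def __init__(self, sectors):
        self.sectors = sectors

    def read_sector(self, n):
        return self.sectors[n]

class EmulatedDisk:
    """Software model of a disk that the guestOS interacts with."""
    def __init__(self, vmm, guest_offset):
        self.vmm = vmm
        self.guest_offset = guest_offset  # where this VM's region starts

    def read(self, guest_sector):
        # Every guest access goes through the VMM for translation:
        return self.vmm.dispatch(self.guest_offset + guest_sector)

class VMM:
    """Maps the emulated devices onto the single physical resource."""
    def __init__(self, disk):
        self.disk = disk
        self.requests_handled = 0

    def dispatch(self, physical_sector):
        self.requests_handled += 1  # the per-request overhead emulation pays
        return self.disk.read_sector(physical_sector)

disk = PhysicalDisk(sectors=list(range(100)))
vmm = VMM(disk)
vm_a = EmulatedDisk(vmm, guest_offset=0)    # two VMs multiplexed
vm_b = EmulatedDisk(vmm, guest_offset=50)   # onto one physical disk
```

Both guests see a disk that starts at sector 0, yet reading sector 3 through vm_b actually touches physical sector 53, and the VMM&#039;s request counter grows with every access, which is the overhead the paragraph describes.&lt;br /&gt;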
&lt;br /&gt;
Paravirtualization improves performance by having the guestOS and the hostOS work together. [9] In paravirtualization the guestOS is not run unmodified; it must be modified so that it is aware it is running in a virtualized system. [9] Because it is aware of this, the guestOS can make better decisions about how it accesses devices, and the VMM&#039;s responsibility is reduced, since it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, the method has several disadvantages. Paravirtualization can only be used if the necessary modifications to the guestOS can be implemented, and not everything can be paravirtualized, which limits the cases in which the method applies. [9] Furthermore, every guestOS must be modified before it can be used, and the required modifications differ between operating systems, so each guestOS must be ported individually. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method the device uses the guestOS&#039;s drivers instead of the hostOS&#039;s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are only limited physical resources available to dedicate to a guestOS, and migration becomes difficult because the guestOS depends on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Exokernels=&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM, or simply as a monolithic kernel divided into two parts. The raw resource-management tasks of the kernel, such as memory management, remain in the exokernel, while higher-level abstractions such as file systems, address spaces and interprocess communication are provided at the application level[1]. These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can produce large performance boosts in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks the fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and bind points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
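The three functions named above can be sketched as a tiny ownership table. The Python sketch below (an invented illustration based on the functions described in [1], not the interface of any real exokernel) shows how tracking ownership, guarding usage and revoking access are enough to keep library OSs from interfering with each other.&lt;br /&gt;

```python
# Sketch of the three exokernel primitives: track ownership, guard
# usage, revoke access. All names are illustrative.

class Exokernel:
    def __init__(self, pages):
        # Ownership table: which library OS, if any, holds each page.
        self.owner = {p: None for p in range(pages)}

    def allocate(self, page, lib_os):
        # Track ownership: record which library OS holds the resource.
        if self.owner[page] is not None:
            raise PermissionError("page already owned")
        self.owner[page] = lib_os

    def access(self, page, lib_os):
        # Guard usage: only the recorded owner may touch the resource.
        if self.owner[page] != lib_os:
            raise PermissionError("not the owner")
        return f"page {page} accessed by {lib_os}"

    def revoke(self, page):
        # Revoke: reclaim the resource. In a real exokernel the library
        # OS would choose which instance of a resource to give up.
        self.owner[page] = None

ek = Exokernel(pages=4)
ek.allocate(0, "libOS-A")
ok = ek.access(0, "libOS-A")
try:
    ek.access(0, "libOS-B")      # another library OS is blocked
    blocked = False
except PermissionError:
    blocked = True
ek.revoke(0)
```

Note that the ‘kernel’ here implements no file system or scheduler at all; it only remembers who owns what and says yes or no, which is the division of labour the paragraph describes.&lt;br /&gt;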
&lt;br /&gt;
Through these three functions the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS, so that traditional OS abstractions can be implemented and applications can manage resources themselves, which is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where protection against inter-library conflicts is required to maintain system integrity.[1] By exposing allocation, the raw resources and their physical names to the application layer, the exokernel allows a library to request physical resources directly, removing the expensive overhead involved in translating virtual names to physical names; [3] the physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and lets the library OS choose which instance of a resource to release[1].&lt;br /&gt;
&lt;br /&gt;
The exokernel is not the only beneficiary of its decreased task load; library OSs also gain several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel[1], and since the library OS is not trusted by the exokernel, it can in turn be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications remain portable: an application that uses a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface [1], and the library OS itself can be made portable by designing it against a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different types of physical resource handling, similar to virtual machines.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of the operating system. It manages processes and interrupts, and controls memory and drivers through process management. The microkernel is an improved design over the original kernel: it moves unnecessary services out of kernel space and into user space, which makes the kernel smaller, more portable and easier to manage, and reduces crashes caused by drivers and other problems. Virtual machines run software that emulates hardware for an operating system to run on, allowing one operating system to run within another while sharing the same physical hardware. Microkernels and virtual machines both have their faults, however. An exokernel can be seen as a compromise between the two: it contains a condensed kernel similar to a microkernel, and it gives libraries better control than a virtual machine does. It can talk to the hardware directly, without the communication difficulty a microkernel has. Computers are always evolving, and further implementations of exokernels would make a great addition.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt; Roch, Benjamin 2004. Microkernel Versus Monolithic Kernel. Vienna University of Technology, Wien, Österreich.&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[9]Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4355</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4355"/>
		<updated>2010-10-15T02:41:55Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is the bridge through which applications access the hardware; it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. We compare exokernels to microkernels and virtual machines by looking at how each design approaches this management. In the exokernel model, the kernel becomes much smaller than even a microkernel, because its functionality is deliberately limited to protection and multiplexing of resources. The virtual machine approach of virtualizing every device on the system provides compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and lets applications build custom abstractions over those devices, improving program performance compared with a VM&#039;s implementation. The exokernel design takes the better concepts from both microkernels and virtual machines, to the extent that an exokernel can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
=Microkernels=&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; without it, the operating system could not function.&lt;br /&gt;
The kernel is the lowest-level section of an operating system and holds the most privileges within the system. It runs alongside ‘user space’, which is where the user has access and can run applications and libraries.[8] The kernel is left to manage the other necessary services; for example, it may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process at its lowest level.[8] A monolithic kernel, which contains all mandatory services within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed new code, or the system changed, the entire kernel had to be recompiled, and because of the number of services inside it, this took an impractical amount of time. This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: a service belongs in the kernel only if keeping it outside would prevent the system from working. Shrinking the kernel in this way can improve the system in several respects, for example in performance and efficiency. [7] A microkernel is therefore a kernel with a reduced amount of mandatory software within itself; it has less software to manage and a smaller size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design that emerged from the end of the 1980’s to the early 1990’s removes the file systems and drivers from the kernel, leaving it with process control, input/output control and interrupt handling. [8] This structure makes the system much more modular and problems easier to solve. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed and, while the device waits for the system to recognize it again, the operating system replaces the driver. This allows real-time updating while the computer is still functional, and reduces the chance of a complete system crash. If a device fails, the kernel does not crash with it, as a monolithic kernel would; the microkernel can reload the failed device&#039;s driver and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
=Virtual Machines=&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. The physical machine&#039;s resources are virtualized so that they can be shared among the operating systems run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine; it does this by handling every request for resources and reconciling those requests with what the hostOS has actually provided to the VM. The hostOS manages the VMM and provides the physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS being run through virtualization. [6] This OS is called the guestOS. It can only access the resources that the hostOS has made available to the VM; [6] it does not know about any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guestOS executes as if it were its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run: device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, since the VM depends on the software emulations rather than on the physical devices. [9] It also allows simpler multiplexing between multiple virtual machines, as sharing can be handled through the virtualized devices. [9] The drawback of emulation is poor performance: the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization improves performance by having the guestOS and the hostOS work together. [9] In paravirtualization the guestOS is not run unmodified; it must be modified so that it is aware it is running in a virtualized system. [9] Because it is aware of this, the guestOS can make better decisions about how it accesses devices, and the VMM&#039;s responsibility is reduced, since it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, the method has several disadvantages. Paravirtualization can only be used if the necessary modifications to the guestOS can be implemented, and not everything can be paravirtualized, which limits the cases in which the method applies. [9] Furthermore, every guestOS must be modified before it can be used, and the required modifications differ between operating systems, so each guestOS must be ported individually. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method the device uses the guestOS&#039;s drivers instead of the hostOS&#039;s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling requests to devices. [9] However, there are only limited physical resources available to dedicate to a guestOS, and migration becomes difficult because the guestOS depends on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Exokernels=&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM, or simply as a monolithic kernel divided into two parts. The raw resource-management tasks of the kernel, such as memory management, remain in the exokernel, while higher-level abstractions such as file systems, address spaces and interprocess communication are provided at the application level[1]. These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can produce large performance boosts in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks the fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and bind points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they could in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS, so that traditional OS abstractions can be implemented and applications can manage resources themselves, which is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where protection against inter-library conflicts is required to maintain system integrity.[1] By exposing allocation, the raw resources and their physical names to the application layer, the exokernel allows a library to request physical resources directly, removing the expensive overhead involved in translating virtual names to physical names; [3] the physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and lets the library OS choose which instance of a resource to release[1].&lt;br /&gt;
&lt;br /&gt;
The exokernel is not the only beneficiary of its decreased task load; library OSs also gain several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel[1], and since the library OS is not trusted by the exokernel, it can in turn be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications remain portable: an application that uses a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface [1], and the library OS itself can be made portable by designing it against a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different types of physical resource handling, similar to virtual machines.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of the operating system. It manages processes and interrupts, and controls memory and drivers through process management. The microkernel is an improved design over the original kernel: it moves unnecessary services out of kernel space and into user space, which makes the kernel smaller, more portable and easier to manage, and reduces crashes caused by drivers and other problems. Virtual machines run software that emulates hardware for an operating system to run on, allowing one operating system to run within another while sharing the same physical hardware. Microkernels and virtual machines both have their faults, however. An exokernel can be seen as a compromise between the two: it contains a condensed kernel similar to a microkernel, and it gives libraries better control than a virtual machine does. It can talk to the hardware directly, without the communication difficulty a microkernel has. Computers are always evolving, and further implementations of exokernels would make a great addition.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4316</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4316"/>
		<updated>2010-10-15T01:43:51Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is the bridge through which applications access the hardware; it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. We compare exokernels to microkernels and virtual machines by looking at how each design approaches this management. In the exokernel model, the kernel becomes much smaller than even a microkernel, because its functionality is deliberately limited to protection and multiplexing of resources. The virtual machine approach of virtualizing every device on the system provides compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and lets applications build custom abstractions over those devices, improving program performance compared with a VM&#039;s implementation. The exokernel design takes the better concepts from both microkernels and virtual machines, to the extent that an exokernel can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; without it, the operating system could not function.&lt;br /&gt;
The kernel is the lowest-level section of an operating system and holds the most privileges within the system. It runs alongside ‘user space’, which is where the user has access and can run applications and libraries.[8] The kernel is left to manage the other necessary services; for example, it may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process at its lowest level.[8] A monolithic kernel, which contains all mandatory services within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] If the kernel needed new code, or the system changed, the entire kernel had to be recompiled, and because of the number of services inside it, this took an impractical amount of time. This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: a service belongs in the kernel only if keeping it outside would prevent the system from working. Shrinking the kernel in this way can improve the system in several respects, for example in performance and efficiency. [7] A microkernel is therefore a kernel with a reduced amount of mandatory software within itself; it has less software to manage and a smaller size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design that emerged from the end of the 1980’s to the early 1990’s removes the file systems and drivers from the kernel, leaving it with process control, input/output control and interrupt handling. [8] This structure makes the system much more modular and problems easier to solve. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed and, while the device waits for the system to recognize it again, the operating system replaces the driver. This allows real-time updating while the computer is still functional, and reduces the chance of a complete system crash. If a device fails, the kernel does not crash with it, as a monolithic kernel would; the microkernel can reload the failed device&#039;s driver and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. The physical machine&#039;s resources are virtualized so that they can be shared among the operating systems run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions. The hypervisor is what allows the VM to operate as if it were on its own machine: it handles all requests for resources and reconciles those requests with what the host OS has actually provided to the VM. The host OS, in turn, manages the VMM and provides physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM contains the OS being run through virtualization, known as the guest OS. [6] The guest OS can access only the resources that the host OS has made available to the VM; [6] it knows nothing of any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guest OS executes as if on its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run, including device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guest OS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles all interactions between them, which usually involves converting instructions from the guest OS into instructions compatible with the device. [9] Device emulation allows a VM to be migrated easily to another machine, since the VM depends on the software emulations rather than on the physical devices. [9] It also simplifies multiplexing between multiple virtual machines, since sharing can be handled through the virtualized devices. [9] The drawback of emulation is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation remains the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization gains performance by having the guest OS and the host OS work together. [9] In paravirtualization, the guest OS is not run unmodified; it is altered so that it is aware it is running in a virtualized environment. [9] Because the guest OS knows this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced, since it no longer has to translate between the guest OS and the physical devices. [9] The performance boost comes with significant disadvantages, however. Paravirtualization can be used only when the necessary modifications to the guest OS can actually be implemented, and not everything can be paravirtualized, which limits the cases in which the method applies. [9] Furthermore, every guest OS must be modified before it can be used, and the required modifications differ from one OS to another, adding the work of porting each guest OS. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guest OS. [9] In this method, the device uses the guest OS’s drivers instead of the host OS’s. [9] This allows the guest OS to use the hardware to its full extent without involving the VMM, and it simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling device requests. [9] However, there are only limited physical resources available to dedicate to a guest OS, and migration becomes difficult because the guest OS now depends on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM, or simply as a monolithic kernel divided into two parts. The raw resource-management tasks, such as memory management, remain in the exokernel, while higher-level abstractions such as file systems, address spaces, and interprocess communication are implemented at the application level.[1] These abstractions are usually provided by library OSs, which allow applications to manage their own machine resources in ways not possible with a traditional kernel. This, in turn, can yield large performance gains in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and bind points, and revokes access to resources. [1] In doing so, the exokernel gives library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they would in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions, the exokernel can support many different situations. By tracking resource ownership, the exokernel can export privileged instructions to the library OS, so that traditional OS abstractions can be implemented and resources can be managed by applications, which is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where protection against inter-library conflicts is required to maintain system integrity.[1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel lets a library request physical resources directly, removing the expensive overhead of translating virtual names to physical names; [3] physical names also capture more useful information and are safer and less resource-intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and to choose which instance of a resource to release.[1]&lt;br /&gt;
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several advantages over operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel,[1] and since a library OS is not trusted by the exokernel, it can instead be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications built on library OSs retain a notion of portability: an application that uses a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface, [1] and a library OS itself can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different styles of physical resource handling, much as virtual machines do.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
- Conclusion - Coming soon&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4314</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4314"/>
		<updated>2010-10-15T01:43:27Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
In computer science, the kernel is the component at the centre of the majority of operating systems. The kernel is the bridge through which applications access the hardware, and it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design handles such management. In the exokernel conceptual model, the kernel is much smaller than a microkernel: exokernels are tiny by design and strive to keep their functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because the hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and custom abstractions over devices in order to improve program performance relative to a VM&#039;s implementation. The exokernel design takes the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; an operating system could not function without it.&lt;br /&gt;
A kernel is the lowest-level component of an operating system and holds the most privileges within a system. It runs alongside ‘user space’, which is where the user has access and where the user’s applications and libraries run.[8] The kernel manages the remaining essential services; for example, it may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative code at the lowest level.[8] A monolithic kernel, a kernel that contains all mandatory services within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] Whenever the kernel needed new code, or the system changed, the entire kernel had to be recompiled, which, given the amount of code it contained, took an impractical amount of time. This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The idea behind a microkernel is to reduce the amount of code within the kernel: functionality is kept in the kernel only if leaving it out would harm the system. Shrinking the kernel in this way can increase performance and efficiency. [7] In short, a microkernel contains only a minimal amount of mandatory software, so it has less software to manage and a much smaller size.&lt;br /&gt;
&lt;br /&gt;
The microkernel design that emerged in the late 1980s and early 1990s removes the file systems and device drivers from the kernel, leaving it responsible only for process control, input/output control, and interrupts. [8] This structure makes the system much more modular and easier to maintain. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed and, while the device waits to be recognized by the system, the operating system replaces the driver. This allows real-time updating while the computer remains functional, reducing the chance of a complete system crash. If a device fails, the kernel will not crash with it, as a monolithic kernel would; the microkernel can reload the failed device’s driver and continue functioning. [7]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. It virtualizes the physical machine’s resources so that they can be shared among the operating systems running in VMs. Virtualizing these resources lets each OS run as if it had a full machine to itself when, in reality, it is running in a virtualized environment on top of a host OS, sharing the machine’s resources with it.&lt;br /&gt;
&lt;br /&gt;
Virtual machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions. The hypervisor is what allows the VM to operate as if it were on its own machine: it handles all requests for resources and reconciles those requests with what the host OS has actually provided to the VM. The host OS, in turn, manages the VMM and provides physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM contains the OS being run through virtualization, known as the guest OS. [6] The guest OS can access only the resources that the host OS has made available to the VM; [6] it knows nothing of any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guest OS executes as if on its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run, including device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guest OS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles all interactions between them, which usually involves converting instructions from the guest OS into instructions compatible with the device. [9] Device emulation allows a VM to be migrated easily to another machine, since the VM depends on the software emulations rather than on the physical devices. [9] It also simplifies multiplexing between multiple virtual machines, since sharing can be handled through the virtualized devices. [9] The drawback of emulation is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation remains the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization gains performance by having the guest OS and the host OS work together. [9] In paravirtualization, the guest OS is not run unmodified; it is altered so that it is aware it is running in a virtualized environment. [9] Because the guest OS knows this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced, since it no longer has to translate between the guest OS and the physical devices. [9] The performance boost comes with significant disadvantages, however. Paravirtualization can be used only when the necessary modifications to the guest OS can actually be implemented, and not everything can be paravirtualized, which limits the cases in which the method applies. [9] Furthermore, every guest OS must be modified before it can be used, and the required modifications differ from one OS to another, adding the work of porting each guest OS. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guest OS. [9] In this method, the device uses the guest OS’s drivers instead of the host OS’s. [9] This allows the guest OS to use the hardware to its full extent without involving the VMM, and it simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling device requests. [9] However, there are only limited physical resources available to dedicate to a guest OS, and migration becomes difficult because the guest OS now depends on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM, or simply as a monolithic kernel divided into two parts. The raw resource-management tasks, such as memory management, remain in the exokernel, while higher-level abstractions such as file systems, address spaces, and interprocess communication are implemented at the application level.[1] These abstractions are usually provided by library OSs, which allow applications to manage their own machine resources in ways not possible with a traditional kernel. This, in turn, can yield large performance gains in several areas, as shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and bind points, and revokes access to resources. [1] In doing so, the exokernel gives library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as they would in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions, the exokernel can support many different situations. By tracking resource ownership, the exokernel can export privileged instructions to the library OS, so that traditional OS abstractions can be implemented and resources can be managed by applications, which is the best way to build flexible and efficient systems; the exokernel itself avoids resource management except where protection against inter-library conflicts is required to maintain system integrity.[1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel lets a library request physical resources directly, removing the expensive overhead of translating virtual names to physical names; [3] physical names also capture more useful information and are safer and less resource-intensive. [3] Finally, by exposing revocation, the exokernel allows well-behaved library OSs to perform application-level resource management [1] and to choose which instance of a resource to release.[1]&lt;br /&gt;
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also gain several advantages over operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel,[1] and since a library OS is not trusted by the exokernel, it can instead be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications built on library OSs retain a notion of portability: an application that uses a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface, [1] and a library OS itself can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs, they also provide a simple yet effective way to emulate several different styles of physical resource handling, much as virtual machines do.&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4313</id>
		<title>COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=COMP_3000_Essay_1_2010_Question_1&amp;diff=4313"/>
		<updated>2010-10-15T01:41:57Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Answer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Question=&lt;br /&gt;
To what extent can exokernels be seen as a compromise between virtual machines and microkernels? Explain how the key design characteristics of these three system architectures compare with each other.&lt;br /&gt;
&lt;br /&gt;
=Answer=&lt;br /&gt;
In computer science, the kernel is the component at the centre of the majority of operating systems. The kernel is the bridge through which applications access the hardware, and it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design handles such management. In the exokernel conceptual model, the kernel is much smaller than a microkernel: exokernels are tiny by design and strive to keep their functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because the hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and custom abstractions over devices in order to improve program performance relative to a VM&#039;s implementation. The exokernel design takes the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; an operating system could not function without it.&lt;br /&gt;
A kernel is the lowest-level component of an operating system and holds the most privileges within a system. It runs alongside ‘user space’, which is where the user has access and where the user’s applications and libraries run.[8] The kernel manages the remaining essential services; for example, it may manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative code at the lowest level.[8] A monolithic kernel, a kernel that contains all mandatory services within itself, was the common kernel type in earlier versions of today’s operating systems. However, this architecture had problems. [8] Whenever the kernel needed new code, or the system changed, the entire kernel had to be recompiled, which, given the amount of code it contained, took an impractical amount of time. This is where a microkernel becomes practical.&lt;br /&gt;
The idea behind a microkernel is to reduce the amount of code within the kernel: functionality is kept in the kernel only if leaving it out would harm the system. Shrinking the kernel in this way can increase performance and efficiency. [7] In short, a microkernel contains only a minimal amount of mandatory software, so it has less software to manage and a much smaller size.&lt;br /&gt;
The microkernel design that emerged in the late 1980s and early 1990s removes the file systems and device drivers from the kernel, leaving it responsible only for process control, input/output control, and interrupts. [8] This structure makes the system much more modular and easier to maintain. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed and, while the device waits to be recognized by the system, the operating system replaces the driver. This allows real-time updating while the computer remains functional, reducing the chance of a complete system crash. If a device fails, the kernel will not crash with it, as a monolithic kernel would; the microkernel can reload the failed device’s driver and continue functioning. [7]&lt;br /&gt;
Want more on the scheduling? I can do that if wanted. -key note on exokernel&#039;s mutiplexing vs microkernel&#039;s messaging, exo more efficient so perhaps running with the idea that messaging b/w processes not necessarily the ideal way need to also start outlaying weaknesses in the design as well in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. It virtualizes the physical machine’s resources so that they can be shared among the operating systems running in VMs. Virtualizing these resources lets each OS run as if it had a full machine to itself when, in reality, it is running in a virtualized environment on top of a host OS, sharing the machine’s resources with it.&lt;br /&gt;
Virtual machines generally involve two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions. The hypervisor is what allows the VM to operate as if it were on its own machine: it handles all requests for resources and reconciles those requests with what the host OS has actually provided to the VM. The host OS, in turn, manages the VMM and provides physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
The VM contains the OS being run through virtualization, known as the guest OS. [6] The guest OS can access only the resources that the host OS has made available to the VM; [6] it knows nothing of any other resources and has no direct access to physical hardware. The VMM takes care of this mediation, while the guest OS executes as if on its own machine, unaware of the mediator.&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run, including device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guest OS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles all interactions between them, which usually involves converting instructions from the guest OS into instructions compatible with the device. [9] Device emulation allows a VM to be migrated easily to another machine, since the VM depends on the software emulations rather than on the physical devices. [9] It also simplifies multiplexing between multiple virtual machines, since sharing can be handled through the virtualized devices. [9] The drawback of emulation is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation remains the most common form of virtualization.&lt;br /&gt;
Paravirtualization gains performance by having the guest OS and the host OS work together. [9] In paravirtualization, the guest OS is not run unmodified; it is altered so that it is aware it is running in a virtualized environment. [9] Because the guest OS knows this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced, since it no longer has to translate between the guest OS and the physical devices. [9] The performance boost comes with significant disadvantages, however. Paravirtualization can be used only when the necessary modifications to the guest OS can actually be implemented, and not everything can be paravirtualized, which limits the cases in which the method applies. [9] Furthermore, every guest OS must be modified before it can be used, and the required modifications differ from one OS to another, adding the work of porting each guest OS. [9]&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guest OS. [9] In this method, the device uses the guest OS’s drivers instead of the host OS’s. [9] This allows the guest OS to use the hardware to its full extent without involving the VMM, and it simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling device requests. [9] However, there are only limited physical resources available to dedicate to a guest OS, and migration becomes difficult because the guest OS now depends on the physical device. [9]&lt;br /&gt;
notes - it ended up being quite lengthy. I mainly focused on the device virtualization rather than the architecture of a VM (like x86 virtualization). I&#039;ll put up my notes for the paper I found for virtualization. I didn&#039;t talk about Xen or VMware though. If any of that is needed, I can try to continue working on it tonight but I have another priority.&lt;br /&gt;
-try focusing on the emulation side of VM where emulation&#039;s weaknesses vs direct hardware access or custom abstraction that exokernels -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM, or simply as a monolithic kernel divided into two parts. The raw resource-management tasks, such as memory management, remain in the exokernel, while higher-level abstractions such as file systems, address spaces, and interprocess communication are implemented at the application level.[1] These abstractions are usually provided by library OSs, which allow applications to manage their own machine resources in ways not possible with a traditional kernel. This, in turn, can yield large performance gains in several areas, as shown below.&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and binding points, and revokes access to resources. [1] By doing so, the exokernel gives the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as you would see in an unmanaged system.&lt;br /&gt;
Through these three functions the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented; this enables application-based resource management, which is the best way to build flexible and efficient systems, while the exokernel itself avoids resource management except where protection against inter-library conflicts is required to maintain system integrity. [1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel lets the library request physical resources directly, which removes the expensive overhead of translating virtual names to physical names; [3] physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation the exokernel allows well-behaved library OSs to perform application-level resource management [1] and lets the library OS choose which instance of a resource to release. [1]&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also see several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel require fewer kernel crossings than on a microkernel. [1] Also, because the library OS is not trusted by the exokernel, it can be trusted by the application. While a library OS may choose to handle low-level management tasks itself, applications retain a notion of portability: applications that use a library OS implementing a standard interface, such as POSIX, will be portable to any system with the same interface, [1] and a library OS itself can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from the kernel, but without the kernel-to-user-space communication issues that microkernels experience. Through their library OSs they also provide a simple yet effective way to emulate several different types of physical resource handling, similar to virtual machines.&lt;br /&gt;
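To make the protection/management split concrete, here is a toy Python sketch of the three duties described above (illustrative only; the class and method names are invented, not the actual Aegis/ExOS interface from [1]):&lt;br /&gt;

```python
# Hypothetical sketch of the three exokernel duties: track ownership,
# guard every use, and revoke. Management of the resource itself is
# left entirely to the library OS.

class Exokernel:
    def __init__(self, pages):
        self.owner = {p: None for p in range(pages)}  # track ownership

    def allocate(self, libos, page):
        if self.owner[page] is not None:
            raise PermissionError("page already owned")
        self.owner[page] = libos

    def guard(self, libos, page):
        # protection check on every use; how the page is used is not
        # the exokernel's business
        if self.owner[page] != libos:
            raise PermissionError("not the owner")

    def revoke(self, page):
        self.owner[page] = None  # resource returns to the free pool

ek = Exokernel(pages=4)
ek.allocate("libos_a", 0)
ek.guard("libos_a", 0)          # fine: the owner uses it as it wishes
try:
    ek.guard("libos_b", 0)      # another library OS is blocked
except PermissionError:
    pass
ek.revoke(0)
ek.allocate("libos_b", 0)       # freed page can be reallocated
```

The point of the sketch is that the kernel never interprets the page contents; it only arbitrates who may touch what.&lt;br /&gt;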
&lt;br /&gt;
=Misc=&lt;br /&gt;
* everything is moved to the discussion page&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4185</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4185"/>
		<updated>2010-10-14T23:43:32Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem in use can rely on this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion, each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** the page must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at the kernel level [7]&lt;br /&gt;
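The grant/map/flush operations above can be sketched in a few lines of Python (a hypothetical model; the subsystem and page names are made up, and flush is simplified to withdrawing the page from every space but the caller&#039;s):&lt;br /&gt;

```python
# Hypothetical sketch of the three address-space operations from [7]:
# grant (transfer a page), map (share a page), flush (withdraw a page).

class AddressSpaces:
    def __init__(self, spaces):
        self.spaces = spaces  # subsystem name: set of accessible pages

    def grant(self, owner, recipient, page):
        # the page moves: removed from the owner, placed in the recipient
        self.spaces[owner].remove(page)
        self.spaces[recipient].add(page)

    def map(self, owner, recipient, page):
        # the page is shared: the owner keeps it
        assert page in self.spaces[owner]
        self.spaces[recipient].add(page)

    def flush(self, owner, page):
        # simplified: the page is withdrawn from every other address space
        for name in self.spaces:
            if name != owner:
                self.spaces[name].discard(page)

aspaces = AddressSpaces({"pager": {1, 2}, "fs": set(), "net": set()})
aspaces.map("pager", "fs", 1)     # shared with the file system
aspaces.grant("pager", "net", 2)  # transferred to the network stack
aspaces.flush("pager", 1)         # withdrawn again from fs
```

This is enough to build user-level memory managers and pagers, which is exactly why the notes say map and flush are required for them.&lt;br /&gt;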
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystems to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender IDs [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSs running on top of a host OS&lt;br /&gt;
* The virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualizing the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
(Not complete but most of article 9)&lt;br /&gt;
Classical Virtualization&lt;br /&gt;
* VMMs allow programs in virtual environments to run natively except for resource usage&lt;br /&gt;
** dominant instructions are executed directly on the CPU&lt;br /&gt;
** the VMM completely controls system resources&lt;br /&gt;
** emulating every native instruction instead would severely affect performance&lt;br /&gt;
** sensitive instructions would violate safety and encapsulation&lt;br /&gt;
** the VMM handles them as privileged instructions&lt;br /&gt;
&lt;br /&gt;
x86 Virtualization&lt;br /&gt;
* virtualization in personal work stations rather than mainframes&lt;br /&gt;
** rings that allow isolation between virtual machines&lt;br /&gt;
** most privileged in ring 0 and least in ring 3. The operating system runs in ring 0 and user apps in ring 3&lt;br /&gt;
*** vmm in ring 0 and vms in lesser privilege rings (1 or 3)&lt;br /&gt;
*** the guestOS believes it is in ring 0&lt;br /&gt;
* address space compression: where to run the VMM&lt;br /&gt;
** if run using the guest&#039;s address space, the guest can find out it is virtualized or compromise the isolation&lt;br /&gt;
* x86 does not trap all sensitive instructions, which violates the classical virtualization requirements, though the VMM can still handle them&lt;br /&gt;
* some privileged accesses fail silently without faulting&lt;br /&gt;
* interrupt virtualization - both the VMM and the guestOS handle interrupts&lt;br /&gt;
* binary translation - improves performance&lt;br /&gt;
* rewriting instructions and trapping before problems arise&lt;br /&gt;
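A toy model of the binary-translation idea above (hypothetical: the instruction names are stand-ins, and a real translator like VMware&#039;s rewrites binary code blocks, not mnemonics):&lt;br /&gt;

```python
# Hypothetical sketch: the translator rewrites the handful of sensitive
# instructions (which on x86 do not all trap) to call into the VMM, and
# leaves the dominant, innocuous instructions to run directly on the CPU.

SENSITIVE = {"popf", "sgdt", "mov_cr3"}   # would read or alter machine state

def translate(block):
    out = []
    for insn in block:
        if insn in SENSITIVE:
            out.append("vmm_emulate_" + insn)  # rewritten before it can misbehave
        else:
            out.append(insn)                   # runs natively
    return out

guest_block = ["add", "load", "popf", "store", "mov_cr3"]
print(translate(guest_block))
```

Because only the rare sensitive instructions are rewritten, most guest code runs at native speed, which is where the performance improvement comes from.&lt;br /&gt;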
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* the guestOS is exposed to VM information so that it is aware it is virtualized and can make decisions based on this&lt;br /&gt;
* allows problem instructions to be avoided&lt;br /&gt;
* Xen&lt;br /&gt;
* the guestOS must be modified and does not run natively&lt;br /&gt;
**works with the hostOS to run efficiently&lt;br /&gt;
&lt;br /&gt;
VMM types&lt;br /&gt;
* hostedVMM - executes in hostOS and uses the drivers and support of the OS&lt;br /&gt;
* Stand-aloneVMM - runs directly on hardware and uses its own drivers and services&lt;br /&gt;
* hybridVMM - runs a serviceOS through which requests to hardware (I/O) go&lt;br /&gt;
&lt;br /&gt;
Device Emulation&lt;br /&gt;
* implement real hardware in software&lt;br /&gt;
* completely virtual device that the guest interacts with&lt;br /&gt;
* mapped to physical hardware that handles the interactions, with the emulation layer doing the conversion&lt;br /&gt;
* allows the vm to be easily migrated between machines as it does not rely on the physical hardware&lt;br /&gt;
* allows having multiple vms and simplifies sharing (multiplexing)&lt;br /&gt;
* poor performance, as the VMM needs to do a lot of work to virtualize the machine&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* modified guestOS to cooperate with VMM &lt;br /&gt;
* VMM does not have to do everything to handle device drivers&lt;br /&gt;
* not everything can be paravirtualized&lt;br /&gt;
* proprietary OSs and device drivers can&#039;t be paravirtualized&lt;br /&gt;
* still allows an increase in performance&lt;br /&gt;
* eventing or callback mechanism&lt;br /&gt;
** guestOS modifies its interrupt mechanisms&lt;br /&gt;
* modifications are not applicable to all guestOS&lt;br /&gt;
&lt;br /&gt;
Dedicated Devices&lt;br /&gt;
* does not virtualize the device but assigns it directly to the guest VM&lt;br /&gt;
* uses the guest&#039;s drivers instead of the host&#039;s&lt;br /&gt;
* simplifies the VMM by removing the secure handling of I/O&lt;br /&gt;
* limited physical devices can be dedicated&lt;br /&gt;
* difficult to migrate the VM, as it depends on the pairing with this resource&lt;br /&gt;
* eliminates the overhead of virtualization and adds simplicity to the VMM&lt;br /&gt;
* direct memory access not supported&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-style architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSs maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing this it provides three important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it can be trusted by the application; the example given is a bad parameter passed to the LibOS, where only the application is affected [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS that implements standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allows the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSs to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOS handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSs (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization during bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
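A hypothetical sketch of the bind-time/access-time split described above: the (potentially complex) authorization policy runs once at bind and yields a capability, and every later access only does a cheap check. The HMAC capability here is our own illustration, not how Aegis implements bindings:&lt;br /&gt;

```python
# Hypothetical secure-binding sketch: expensive policy at bind time,
# cheap ownership check at access time, and the kernel never needs to
# understand the resource itself.

import hashlib
import hmac

SECRET = b"exokernel-secret"  # invented; stands in for kernel state

def bind(libos, resource):
    # bind time: the full authorization policy runs here (stand-in policy)
    authorized = libos in {"libos_a", "libos_b"}
    if not authorized:
        raise PermissionError("bind refused")
    msg = (libos + ":" + resource).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()  # capability

def access(libos, resource, capability):
    # access time: cheap check that the capability matches; the policy is
    # never re-evaluated and the resource is never interpreted
    msg = (libos + ":" + resource).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(capability, expected)

cap = bind("libos_a", "disk_block_7")
assert access("libos_a", "disk_block_7", cap)
assert not access("libos_b", "disk_block_7", cap)
```

The design point is that the authorization cost is paid once per binding rather than once per access.&lt;br /&gt;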
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows for LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSs are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
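The revocation-then-abort sequence can be sketched as follows (hypothetical names; in the actual system [1] a forcibly repossessed resource is recorded for the LibOS rather than silently destroyed):&lt;br /&gt;

```python
# Hypothetical sketch of visible revocation backed by an abort protocol:
# the exokernel first asks the LibOS to give a resource back, and only
# takes it by force if the LibOS fails to respond.

class LibOS:
    def __init__(self, cooperative):
        self.cooperative = cooperative
        self.pages = {1, 2}

    def please_release(self, page):
        # visible revocation: the LibOS decides how to comply (it could
        # pick a different instance, or write out critical data first)
        if self.cooperative:
            self.pages.discard(page)
            return True
        return False  # ignores the request

def revoke(libos, page):
    if libos.please_release(page):
        return "released voluntarily"
    libos.pages.discard(page)     # abort protocol: forcible reclaim
    return "aborted"

assert revoke(LibOS(cooperative=True), 1) == "released voluntarily"
assert revoke(LibOS(cooperative=False), 1) == "aborted"
```

Well-behaved LibOSs thus keep control over what they give up; the abort path only exists so a misbehaving LibOS cannot hold resources hostage.&lt;br /&gt;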
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it better later&lt;br /&gt;
&lt;br /&gt;
[9]Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&lt;br /&gt;
&lt;br /&gt;
Not completely sure of the citation style used above.&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels,virtual machines, microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview](Power Point)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access through a protected layer, and at the same time contain enough hardware abstraction to give application programs the benefit of hidden hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel - is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
A virtual machine is a simulation of the machine or devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays Kazakhstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there were more like 7 of us. btw if anyone has any more information feel free to add it; it would be nice if you add the references so that citing is really easy. on acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel is an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
Aaron .L&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 what do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge that lets applications access the hardware; it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about this management. In the exokernel conceptual model, the kernel is much smaller than a microkernel: it strives to keep its functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity, and it is less efficient than a real machine because the hardware is accessed indirectly. The exokernel, by contrast, provides low-level hardware access and custom abstractions over those devices in order to improve program performance, as opposed to a VM&#039;s implementation. The exokernel concept can thus take the better ideas of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system; an operating system could not function without it.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and within a system it has the most privileges.  It runs alongside ‘user space’, which is where the user has access and can run applications and libraries.[8]  This leaves the kernel to manage the other necessary processes; for example, the kernel could manage the file systems and perform process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, which is a kernel that contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems.  However, this architecture had problems.[8]  If the kernel needed more code, or a change in the system, the entire kernel had to be recompiled, and due to the amount of processes within it, this took an inefficient amount of time.  This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: a piece of code is included in the kernel only if leaving it outside would adversely affect the system. [7] Reducing the kernel this way can affect the system in a variety of ways, for example through increased performance and efficiency. [7] In other words, a microkernel is a kernel that keeps the amount of mandatory software within itself to a minimum, which means it contains less software to manage and has a reduced size.  &lt;br /&gt;
&lt;br /&gt;
The microkernel structure that emerged between the end of the 1980s and the early 1990s removes the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular and makes it easier to provide solutions.  If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows for real-time updating that can be done while the computer is still functional, and it can prevent a complete crash of the system.  If a device fails, the kernel will not crash with it, as a monolithic kernel would; the microkernel can reload the driver of the failed device and continue functioning.  [7]  &lt;br /&gt;
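The fault-isolation idea above (a failed driver is reloaded instead of crashing the kernel) can be sketched as a toy model. This is purely illustrative: the supervisor class, driver names, and the exception stand in for real process isolation and IPC, and none of them come from the essay's references.

```python
# Toy model of microkernel-style driver isolation: drivers run outside the
# "kernel", so one driver faulting does not bring the system down -- the
# supervisor simply reloads it and keeps running. All names are invented.

class DriverCrash(Exception):
    pass

def flaky_disk_driver(request):
    # Simulate a driver fault on one specific request.
    if request == "bad-sector":
        raise DriverCrash("disk driver fault")
    return f"read {request}"

class MicrokernelSupervisor:
    def __init__(self):
        self.drivers = {}
        self.restarts = 0

    def load_driver(self, name, fn):
        self.drivers[name] = fn

    def call_driver(self, name, request):
        try:
            return self.drivers[name](request)
        except DriverCrash:
            # In a monolithic kernel this fault would take down the kernel;
            # here we reload the driver and report the failed request.
            self.restarts += 1
            self.load_driver(name, flaky_disk_driver)  # the "reload"
            return None

kernel = MicrokernelSupervisor()
kernel.load_driver("disk", flaky_disk_driver)
print(kernel.call_driver("disk", "block-7"))     # normal request succeeds
print(kernel.call_driver("disk", "bad-sector"))  # driver faults, is reloaded
print(kernel.call_driver("disk", "block-8"))     # system keeps functioning
```

The point of the sketch is only the control flow: the fault is contained at the driver boundary, and service resumes after the reload.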
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. Key note on exokernel&#039;s multiplexing vs microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We also need to start laying out weaknesses in the design, in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the operating systems run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VMs running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine. This is done by handling any requests for resources and reconciling those requests with what the hostOS has actually provided to the VM.  The hostOS provides management for the VMM, as well as physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS. It will only be able to access resources that have been made available to the VM by the hostOS. [6] The guestOS does not know about any other resources and does not have direct access to physical hardware; the VMM takes care of this, while the guestOS executes as if on its own machine, unaware of the mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run. These include device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation the VMM provides, in software, a complete virtualization of a device for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it does not depend on the physical devices but rather on the software emulations. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization allows for a boost in performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS; it must be modified so that it is aware it is running in a virtualized system. [9] Since the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM’s responsibility is reduced, as it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are several disadvantages. Paravirtualization can only be used if the necessary modifications can be made to the guestOS; not everything can be paravirtualized, which limits the cases in which this method applies. [9] Also, every guestOS must be modified before it can be used with paravirtualization, and the modifications differ between operating systems, so there is the added task of implementing these changes to make each guestOS compatible. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS’s drivers instead of the hostOS’s. [9] This allows the guestOS to use the hardware to its full extent, without having to deal with the VMM, and it simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling the requests to devices. [9] However, there are only limited physical resources that can be dedicated to a guestOS. This approach also makes migration difficult, as the guestOS becomes dependent on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;notes&#039;&#039;&#039;&lt;br /&gt;
- it ended up being quite lengthy. I mainly focused on the device virtualization rather than the architecture of a VM (like x86 virtualization). I&#039;ll put up my notes for the paper I found for virtualization. I didn&#039;t talk about Xen or VMware though. If any of that is needed, I can try to continue working on it tonight but I have another priority.&lt;br /&gt;
&lt;br /&gt;
-try focusing on the emulation side of VM: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. They can also be seen as simply dividing a monolithic kernel into two parts. The raw resource management tasks of the kernel, such as memory management, remain in the exokernel, while the higher level abstractions such as file systems, address spaces, and interprocess communication are done at the application level[1]. These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel. This in turn can cause large performance boosts in several areas, as shown below. &lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and binding points, and revokes access to resources. [1] By doing so the exokernel allows the library OSs maximum freedom over the machine&#039;s resources without allowing them to interfere with one another&#039;s resources, as you would see in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented, and can allow application-based resource management, which is the best way to build flexible and efficient systems, all while avoiding resource management except when inter-library conflict protection is required to maintain system integrity.[1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel allows the library to request physical resources, which removes the expensive overhead involved in translating virtual names to physical names. [3] The physical names also capture more useful information and are safer and less resource intensive. [3] Finally, by exposing revocation, the exokernel allows well behaved library OSs to perform application level resource management [1] and lets the library OS choose which instance of a resource to release[1].&lt;br /&gt;
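The three duties listed in [1] (track ownership, guard every use, revoke access) can be sketched as ownership bookkeeping. The class, method names, and "page" resources below are invented for illustration; a real exokernel does this for physical pages, disk blocks, and similar resources.

```python
# Minimal sketch of the exokernel's three functions: (1) track ownership,
# (2) guard all resource usage, (3) revoke access visibly. Illustrative only.

class Exokernel:
    def __init__(self, resources):
        self.owner = {r: None for r in resources}   # 1. track ownership

    def allocate(self, resource, lib_os):
        if self.owner[resource] is None:
            self.owner[resource] = lib_os
            return True
        return False                                 # already owned

    def use(self, resource, lib_os):
        # 2. guard all resource usage: only the owner may touch it.
        return self.owner.get(resource) == lib_os

    def revoke(self, resource):
        # 3. visible revocation: the owner is told which resource was taken.
        previous = self.owner[resource]
        self.owner[resource] = None
        return previous

exo = Exokernel(["page0", "page1"])
exo.allocate("page0", "libOS-A")
print(exo.use("page0", "libOS-A"))   # True: the owner may use the page
print(exo.use("page0", "libOS-B"))   # False: guarded from other libOSes
print(exo.revoke("page0"))           # libOS-A learns its page was revoked
```

Note what is absent: the kernel never interprets what "page0" means to the library OS, matching the separation of protection from management.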
&lt;br /&gt;
Not only does the exokernel benefit from its decreased task load, but the library OSs also experience several benefits compared to operating on a microkernel or VM. Library OSs running on an exokernel have a reduced number of kernel crossings compared to a microkernel[1], and since the library OS is not trusted by the exokernel, it can be trusted by the application. While a library OS may choose to handle low level management tasks itself, there is still a notion of portability for applications working with library OSs: applications that use a library OS that implements standard interfaces, such as POSIX, will be portable to any system with the same interface [1], and a library OS can itself be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details. [1]&lt;br /&gt;
&lt;br /&gt;
Exokernels follow the same design pattern of removing unnecessary code from within the kernel but without the same kernel to user space communication issues that microkernels experience. They also provide through their library OSs a simple yet effective way to emulate several different types of physical resource handling methods similar to virtual machines.&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]   -- We may not need this.  Corey did a good job with the Exokernel and incorporated the information and its compromise of the two systems&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put up suggestions and edits. One of us should try to unify the style if there are contradictions, and put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
I made some edits to the first two paragraphs. I just reworded some of the unclear sentences and fixed some grammatical errors. I&#039;ll work on editing more of it after COMP 3007. Also, when all the parts are up I can go through it and link the paragraphs together so it reads more like an essay  --[[User:Aellebla|Aellebla]] 15:18, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
So far so good, if you find some sentences that are off, go ahead and correct them, just note to us in here that you&#039;ve made changes. Almost done guys! -Slade&lt;br /&gt;
&lt;br /&gt;
Awesome Steph!  Also, Awesome Corey, sounds sweet, looks good-JSlonosky&lt;br /&gt;
&lt;br /&gt;
==Potential Test Questions==&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4150</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4150"/>
		<updated>2010-10-14T22:54:24Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystems, and any subsystem that is used can guarantee this of all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates the physical page to the virtual page. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the user to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
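The grant/map/flush operations listed above can be sketched by modelling an address space as the set of pages it can reach. This is a deliberately simplified illustration of the semantics in [7]: no actual memory, just ownership bookkeeping, and all names are invented.

```python
# Sketch of the three address-space operations: grant moves a page,
# map shares a page, flush withdraws a page from all recipients.

class AddressSpace:
    def __init__(self, name, pages=()):
        self.name = name
        self.pages = set(pages)

def grant(owner, recipient, page):
    # Grant: the page is removed from the granter and put in the recipient.
    owner.pages.remove(page)   # must be available to the owner
    recipient.pages.add(page)

def map_page(owner, recipient, page):
    # Map: the page is shared; it stays in the owner's address space too.
    assert page in owner.pages
    recipient.pages.add(page)

def flush(owner, recipients, page):
    # Flush: the page is withdrawn from every address space it was mapped to.
    for space in recipients:
        space.pages.discard(page)

pager = AddressSpace("pager", ["p1", "p2"])
task = AddressSpace("task")

map_page(pager, task, "p1")   # shared: both can now reach p1
grant(pager, task, "p2")      # moved: only task can reach p2
flush(pager, [task], "p1")    # withdrawn from task; pager keeps p1
print(pager.pages, task.pages)
```

This also shows why map and flush suffice to build user-level memory managers and pagers: mapping hands out access, flushing takes it back.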
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel IPC&lt;br /&gt;
** grant and map also need IPC (so by the principle above this has to be in the kernel)[7]&lt;br /&gt;
** basic way for subprocesses to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender ID [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among operating systems running on top of a host OS&lt;br /&gt;
* The virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor[4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the vm is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through process-level protection mechanisms[6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
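The "guestOS and hypervisor work together" point can be illustrated with a toy hypercall: a paravirtualized guest, knowing it is virtualized, calls the hypervisor directly instead of issuing a privileged instruction that the VMM must trap and decode. The hypercall names and registry are invented; this is only a sketch of the control flow, not any real hypervisor API.

```python
# Toy model of paravirtualization: the modified guest uses an explicit
# hypercall (fast path) instead of faulting on a privileged instruction
# that the VMM would have to trap and emulate (slow path). Illustrative only.

HYPERCALLS = {}

def hypercall(name):
    def register(fn):
        HYPERCALLS[name] = fn
        return fn
    return register

@hypercall("set_page_table")
def set_page_table(base):
    # The hypervisor performs the privileged work on the guest's behalf.
    return f"hypervisor installed page table at {base:#x}"

def unmodified_guest(base):
    # A native guest executes the privileged instruction itself and faults;
    # the VMM must trap the fault, decode the instruction, and emulate it.
    return "trap, then VMM decodes instruction, then emulate (slow)"

def paravirtualized_guest(base):
    # A modified guest knows it is virtualized and calls the hypervisor.
    return HYPERCALLS["set_page_table"](base)

print(unmodified_guest(0x1000))
print(paravirtualized_guest(0x1000))
```

The cost of the fast path is exactly the disadvantage noted above: the guest must be modified before it can make hypercalls at all.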
&lt;br /&gt;
----&lt;br /&gt;
==== Classical Virtualization ====&lt;br /&gt;
(Not complete but most of article 9)&lt;br /&gt;
* VMMs allow programs in virtual environments to run natively, except for resource usage&lt;br /&gt;
** dominant instructions executed directly on the CPU&lt;br /&gt;
** VMM completely controls system resources&lt;br /&gt;
** often need to emulate every native instruction, which would severely affect the performance&lt;br /&gt;
** sensitive instructions are those that violate safety and encapsulation&lt;br /&gt;
** VMM handles them as privileged instructions&lt;br /&gt;
&lt;br /&gt;
x86 Virtualization&lt;br /&gt;
* virtualization in personal workstations rather than mainframes&lt;br /&gt;
** rings that allow isolation between virtual machines&lt;br /&gt;
** most privileged in ring 0 and least in ring 3. The operating system runs in ring 0 and user apps in ring 3&lt;br /&gt;
*** vmm in ring 0 and vms in lesser privilege rings (1 or 3)&lt;br /&gt;
*** guestOS believes its in ring 0&lt;br /&gt;
* address space compression, where to run the VMM&lt;br /&gt;
** if run using guest address space, the guest can find out it&#039;s virtualized or compromise the isolation&lt;br /&gt;
* does not trap all sensitive instructions but can handle them, violates classical virtualization description&lt;br /&gt;
* some privileged access fail without faulting&lt;br /&gt;
* interrupt virtualization - VMM handles AND guestOS handles&lt;br /&gt;
* binary translation - improve performance&lt;br /&gt;
* rewriting instructions and trapping before problems arise&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* guestOS become exposed to vm information so that the guest is aware that it is virtualized and can make decisions based on this&lt;br /&gt;
* allows to avoid problem instructions&lt;br /&gt;
* Xen&lt;br /&gt;
* guestOS must be modified and is not natively running&lt;br /&gt;
**works with the hostOS to run efficiently&lt;br /&gt;
&lt;br /&gt;
VMM types&lt;br /&gt;
* hostedVMM - executes in hostOS and uses the drivers and support of the OS&lt;br /&gt;
* Stand-aloneVMM - runs directly on hardware and uses its own drivers and services&lt;br /&gt;
* hybridVMM - runs a serviceOS where requests to hardware go through (I/O)&lt;br /&gt;
&lt;br /&gt;
Device Emulation&lt;br /&gt;
* implement real hardware in software&lt;br /&gt;
* completely virtual device that the guest interacts with&lt;br /&gt;
* mapped to physical hardware that handles the interactions; the emulation layer does the conversion&lt;br /&gt;
* allows the vm to be easily migrated between machines as it does not rely on the physical hardware&lt;br /&gt;
* allows having multiple vms and simplifies sharing (multiplexing)&lt;br /&gt;
* poor performance, as the VMM needs to do a lot of work to virtualize the machine&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* modified guestOS to cooperate with VMM &lt;br /&gt;
* VMM does not have to do everything to handle device drivers&lt;br /&gt;
* not everything can be paravirtualized&lt;br /&gt;
* proprietary OSes and device drivers can&#039;t be paravirtualized&lt;br /&gt;
* still allows an increase in performance&lt;br /&gt;
* eventing or callback mechanism&lt;br /&gt;
** guestOS modifies interrupt mechs&lt;br /&gt;
* modifications are not applicable to all guestOS&lt;br /&gt;
&lt;br /&gt;
Dedicated Devices&lt;br /&gt;
* does not virtualize the device but assigns it directly to the guest VM&lt;br /&gt;
* uses the guest&#039;s drivers instead of the host&#039;s&lt;br /&gt;
* simplifies the VMM by removing the handling of I/O securely&lt;br /&gt;
* limited physical devices that can be dedicated&lt;br /&gt;
* difficult to migrate the VM as it depends on the pairing with this resource&lt;br /&gt;
* eliminates the overhead of virtualization and adds simplicity in the VMM&lt;br /&gt;
* direct memory access not supported&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings[1]&lt;br /&gt;
* Goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing so it provides 3 important tasks[1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are)[1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings[1]&lt;br /&gt;
** Not trusted by the exokernel, so it can be trusted by the application. Example given is a bad parameter passed to the LibOS: only the application is affected.[1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS that implements standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an Exokernel tries to create low level primitives that the hardware resources can be accessed from, this also includes interrupts,exceptions [1]&lt;br /&gt;
** the exokernel also export privileged instructions to the LibOS so that traditional OS abstractions can be implemented (eg Process , address pace)[1]&lt;br /&gt;
** Exokernels should avoid resource management except when required protection ( allocation , revocation , ownership)[1]&lt;br /&gt;
** application based resource management is the best way to build flexible efficient flexible systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow LibOs to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic, the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical name&#039;s when ever possible[3] (not to sure what physical names are, I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* The LibOS handles resource policy decisions&lt;br /&gt;
* The exokernel has a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which blocks to write and such)&lt;br /&gt;
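A minimal sketch of that policy split (invented names, not from the papers): the exokernel arbitrates between competing LibOSes only through allocation decisions, here a simple per-LibOS page quota, while each LibOS decides what to do with the pages it gets:&lt;br /&gt;

```python
# Toy policy sketch (invented names): the exokernel enforces only
# inter-LibOS arbitration (here, a fixed page quota per LibOS);
# what each LibOS does with its pages is its own policy decision.

class PageAllocator:
    def __init__(self, total_pages, quota):
        self.free = list(range(total_pages))
        self.quota = quota        # max pages any one LibOS may hold
        self.held = {}            # LibOS id -> set of page numbers

    def allocate(self, libos_id):
        held = self.held.setdefault(libos_id, set())
        if len(held) >= self.quota:
            # the only policy the exokernel applies: deny over-quota LibOSes
            raise MemoryError("quota exceeded, allocation denied")
        page = self.free.pop()
        held.add(page)            # the LibOS decides how to use the page
        return page

    def revoke_target(self):
        # when pages run short, ask the biggest holder to give some back
        return max(self.held, key=lambda k: len(self.held[k]))

alloc = PageAllocator(total_pages=8, quota=2)
alloc.allocate("libos-a")
alloc.allocate("libos-a")
alloc.allocate("libos-b")
```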
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is checked only at bind time [1]&lt;br /&gt;
** Applications with complex resource needs are only authorized during the bind [1]&lt;br /&gt;
* access checking is done at access time, with no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can run without the application being scheduled [2]&lt;br /&gt;
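The bind-time vs access-time split described above can be sketched as follows (an illustrative toy, assuming an invented authorization rule; the real mechanisms are hardware TLB entries, software caches, or downloaded code):&lt;br /&gt;

```python
# Toy secure-bindings table (invented): the expensive authorization policy
# runs once, at bind time; each access is then a cheap membership test
# that needs no understanding of the resource itself.

class SecureBindings:
    def __init__(self, authorize):
        self.authorize = authorize   # complex policy check, run at bind time
        self.bindings = set()        # (app, resource) pairs, checked cheaply

    def bind(self, app, resource):
        if not self.authorize(app, resource):
            raise PermissionError("bind denied")
        self.bindings.add((app, resource))

    def access(self, app, resource):
        # access-time check: simple and resource-agnostic
        if (app, resource) not in self.bindings:
            raise PermissionError("not bound")
        return "ok"

# invented policy: only the trusted app may bind anything
table = SecureBindings(authorize=lambda app, res: app == "trusted-app")
table.bind("trusted-app", "tlb-entry-7")
```

The expensive authorize check runs once at bind time; every later access is just a set-membership test.&lt;br /&gt;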
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* The exokernel must be careful not to delete state, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
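The visible revocation and abort protocol above can be sketched like this (invented API): the exokernel first asks the LibOS to pick a resource to give back, and only falls back to taking one by force if the LibOS does not respond:&lt;br /&gt;

```python
# Toy revocation sketch (invented API). Visible revocation lets the LibOS
# choose WHICH page to give back; the abort protocol kicks in only when
# the LibOS fails to respond to the request.

class RevocableLibOS:
    def __init__(self, pages, cooperative=True):
        self.pages = set(pages)
        self.cooperative = cooperative

    def on_revoke(self):
        if self.cooperative and self.pages:
            return self.pages.pop()   # LibOS picks the page it values least
        return None                   # unresponsive LibOS

def revoke_page(libos):
    page = libos.on_revoke()          # step 1: visible, polite request
    if page is not None:
        return ("released", page)
    page = libos.pages.pop()          # step 2: abort protocol, forced revoke
    return ("aborted", page)

polite = RevocableLibOS(pages={1, 2, 3})
stuck = RevocableLibOS(pages={7, 8}, cooperative=False)
```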
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to the hardware, which would create a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel verses monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it better later&lt;br /&gt;
&lt;br /&gt;
[9]Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&lt;br /&gt;
&lt;br /&gt;
Not completely sure of the citation style used above.&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, only as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels, in the sense that exokernels give developers low-level access similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel - fewest hardware abstractions presented to the developer.&lt;br /&gt;
Microkernel - the near-minimum amount of software that can provide the mechanisms needed to implement an operating system.&lt;br /&gt;
Virtual machine - a simulation of any device requested by an application program.&lt;br /&gt;
Exokernel - I&#039;ve got a sound card.&lt;br /&gt;
Virtual Machine - I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match.&lt;br /&gt;
Microkernel - I&#039;ve got a sound card that plays the Kazakhstan sound format only.&lt;br /&gt;
Microkernel - Very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefits of running Linux software like modern browsers)&lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same basic architecture, with the core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; a resource to an application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel, but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something, you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy. That sounds good. There should be 5 or 6 of us though... Oh well. Their loss. I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us. btw, if anyone has any more information feel free to add it. it would be nice if you add the references, so that citing is really easy; on acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words. Sorry about the Microkernels not being done yet. Working on an outline now. Finally found how to access the ACM through Carleton. Gawd.&lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management. Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man. I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done. Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything; the issue is I don&#039;t have any context on what to write or how to tie it in to the rest of the essay. I only have a Japanese quiz tomorrow morning, then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about the exokernel, or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself. Slade is getting off work in an hour and we can double-check what he is doing then. We can put it together tomorrow sometime and fill in the other stuff. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about. Get something down, and we can go over it. Cling, just write what you think. There is not a lot to go over if I write kernel/microkernel well enough. What is an exokernel? The exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
Aaron .L&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here.&lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean that exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels.&lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. We are comparing exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, exokernels are much smaller than microkernels: they are tiny and strive to limit their functionality to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity within the system, and is less efficient than a real machine because it accesses the hardware indirectly. This can be observed by examining how the exokernel provides low-level hardware access and custom abstractions over those devices in order to improve program performance, as opposed to a VM&#039;s implementation. The exokernel design can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system. Within a system, it has the most privileges. It runs alongside the &#039;user space&#039;. It is in the user space that a user has access; this is also where the user runs applications and libraries. [8] This leaves the kernel to manage the other necessary processes. For example, the kernel could manage the file systems and perform process scheduling. The kernel is layered, with the most authoritative process on its lowest level. [8] A monolithic kernel, which is a kernel that contains all mandatory processes within itself, was the common kernel type used by earlier versions of today&#039;s operating systems. However, this architecture had problems. [8] If the kernel needed to be updated with more code, or a change in the system, the entire kernel would need to be recompiled; due to the amount of processes within it, this would take an inefficient amount of time. This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: functionality is included in the kernel only if leaving it out would impact the system. Implementing a microkernel can affect the system in a variety of ways; for example, there can be increased performance and efficiency. [7] In short, a microkernel is a kernel that has a reduced amount of mandatory software within itself. This means that it contains less software to manage, and has a reduced size.&lt;br /&gt;
&lt;br /&gt;
The microkernel that emerged from the end of the 1980s to the early 1990s has a structure in which the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular, and easier to provide solutions for. If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver. This allows for real-time updating that can be done while the computer is still functional, which can prevent a complete crash of the system. Therefore, if a device fails, the kernel will not crash itself, like a monolithic kernel would. The microkernel can reload the driver of the device that failed and continue functioning. [7]&lt;br /&gt;
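The restart property described above can be modelled in a few lines (a toy simulation, not real microkernel code): because the driver is an ordinary user-space service reached by a message, a crash becomes a failed request that the kernel can answer by reloading the driver, rather than a system crash:&lt;br /&gt;

```python
# Toy simulation (not real microkernel code) of the restart property:
# the driver is just a user-space service reached by a message, so a
# driver crash is a failed request the kernel recovers from by
# reloading the driver, instead of a whole-system crash.

class Driver:
    def __init__(self):
        self.healthy = True

    def handle(self, msg):
        if not self.healthy:
            raise RuntimeError("driver crashed")
        return "handled:" + msg

class Microkernel:
    def __init__(self):
        self.driver = Driver()

    def send(self, msg):
        try:
            return self.driver.handle(msg)
        except RuntimeError:
            self.driver = Driver()    # reload the failed driver
            return self.driver.handle(msg)

mk = Microkernel()
```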
&lt;br /&gt;
Want more on the scheduling? I can do that if wanted. - Key note on the exokernel&#039;s multiplexing vs the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We need to also start outlining weaknesses in the design, in order to play up the idea that an exokernel just does it better. -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the OSs run in the VM. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is actually running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine. It does this by handling any requests for resources and reconciling these requests with what the hostOS has actually provided to the VM. The hostOS provides management for the VMM, as well as allowing physical access to devices, hardware and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS. It will only be able to access resources that have been made available to the VM by the hostOS. [6] Otherwise, the guestOS will not know about any other resources and does not have direct access to physical hardware; this is taken care of by the VMM, while the guestOS executes as its own machine, unaware of this mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run. This includes device emulation, paravirtualization and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation the VMM provides, in software, a complete virtualization of a device for the guestOS to interact with. [9] The VMM will map this virtualized device to the physical resource and handle any interactions between them. This will usually include converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it is not dependent on the physical devices but rather on the software emulations. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
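A minimal sketch of device emulation as described above (invented interfaces; a real VMM intercepts accesses via hardware traps, not a method call): every guest request goes through the VMM, which converts it into an operation the physical device understands:&lt;br /&gt;

```python
# Minimal device-emulation sketch (invented interfaces; a real VMM
# intercepts guest accesses with hardware traps, not method calls).
# The guest sees a software device; the VMM converts each request into
# an operation the physical device understands.

class RealDevice:
    def __init__(self):
        self.log = []

    def raw_write(self, port, value):
        self.log.append((port, value))

class EmulatedDevice:
    PORT_MAP = {"sound": 0x220}       # guest-visible name -> real port

    def __init__(self, real):
        self.real = real

    def guest_write(self, name, value):
        # the conversion step that makes emulation slow: every access
        # passes through the VMM and gets translated
        self.real.raw_write(self.PORT_MAP[name], value)

real = RealDevice()
device = EmulatedDevice(real)
device.guest_write("sound", 42)
```

Because the guest only ever touches the software device, it can be migrated to any machine whose VMM provides the same emulated interface.&lt;br /&gt;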
&lt;br /&gt;
Paravirtualization allows for a boost in performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS: it must be modified so that it is aware it is a virtualized system. [9] Since the guestOS is aware of this, it can make better decisions about how it accesses devices. Because the guestOS can handle these decisions itself, the VMM&#039;s responsibility is reduced, as it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are several disadvantages. You can only use paravirtualization if you can implement the modifications to the guestOS; as well, not everything can be paravirtualized, which limits the cases in which this method can be used. [9] Also, every guestOS must be modified in order to be used in paravirtualization, and the modifications differ between OSs, so there is also the task of implementing these changes to make each guestOS compatible. [9]&lt;br /&gt;
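The difference can be sketched with toy counters (illustrative only; the names and step counts are invented): an unmodified guest traps to the hypervisor, which must decode and translate the request, while a paravirtualized guest calls the hypervisor interface directly:&lt;br /&gt;

```python
# Toy counters (invented numbers) contrasting trap-and-emulate with a
# paravirtual hypercall: the modified guest already speaks the
# hypervisor interface, so the translation step disappears.

class Hypervisor:
    def __init__(self):
        self.steps = 0

    def trap_and_emulate(self, op):
        self.steps += 2               # decode the guest op, then translate
        return "done:" + op

    def hypercall(self, op):
        self.steps += 1               # guest issued the request directly
        return "done:" + op

class NativeGuest:
    def io(self, hv, op):
        return hv.trap_and_emulate(op)   # unmodified guest: every op traps

class ParaGuest:
    def io(self, hv, op):
        return hv.hypercall(op)          # modified guest: direct hypercall

native_hv, para_hv = Hypervisor(), Hypervisor()
NativeGuest().io(native_hv, "disk-write")
ParaGuest().io(para_hv, "disk-write")
```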
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device will use the guestOS&#039;s drivers instead of the hostOS&#039;s. [9] This allows the guestOS to use the hardware to its full extent, without having to deal with the VMM. This simplifies the VMM, eliminating the overhead of virtualizing the hardware and handling the requests to devices. [9] However, there are limited physical resources to be dedicated to a guestOS. This also makes migration difficult, as the guestOS is dependent on the physical device. [9]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;notes&#039;&#039;&#039;&lt;br /&gt;
- It ended up being quite lengthy. I mainly focused on the device virtualization rather than the architecture of a VM (like x86 virtualization). I&#039;ll put up my notes from the paper I found on virtualization. I didn&#039;t talk about Xen or VMware though. If any of that is needed, I can try to continue working on it tonight, but I have another priority.&lt;br /&gt;
&lt;br /&gt;
- Try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide. -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
(This is only half, I have a Bell Tech here working on my internet. I will finish when he leaves)&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between a microkernel and a VM. An exokernel can also be seen as simply dividing a monolithic kernel into two parts: the raw resource management tasks of the kernel, such as memory management, remain in the exokernel, while the higher-level abstractions, such as file systems, address spaces, and interprocess communication, are provided at the application level [1]. These abstractions are usually provided by library OSs, which allow applications to handle their own machine resources in ways not possible with a traditional kernel, which in turn can cause large performance boosts in several areas, as will be shown below.&lt;br /&gt;
&lt;br /&gt;
The exokernel walks this fine line between management and control by providing only three functions for accessing the machine&#039;s resources: it tracks ownership of resources, ensures protection by guarding all resource usage and binding points, and revokes access to resources. [1] By doing so, the exokernel allows the library OSs maximum freedom over the machine&#039;s resources, without allowing them to interfere with one another&#039;s resources as you would see in an unmanaged system.&lt;br /&gt;
&lt;br /&gt;
Through these three functions the exokernel can control and allow many different situations. By tracking the ownership of resources, the exokernel can export privileged instructions to the library OS so that traditional OS abstractions can be implemented; this allows application-based resource management, which is the best way to build flexible and efficient systems, all while avoiding resource management except when inter-library conflict protection is required to maintain system integrity. [1] By exposing allocation, the raw resources, and their physical names to the application layer, the exokernel is able to allow a LibOS to request physical resources&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet. Looks like we got it covered. We should read each other&#039;s parts and add suggestions and edits. One of us should try to change it to one style if there are contradictions, and put it on the main page. We can figure that out tomorrow. - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
I made some edits to the first two paragraphs. I just reworded some of the unclear sentences and some grammatical errors. I&#039;ll work on editing more of it after comp 3007. Also when all the parts are up i can go through it and link the paragraphs together so it can be read more like an essay  --[[User:Aellebla|Aellebla]] 15:18, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
So far so good, if you find some sentences that are off, go ahead and correct them, just note to us in here that you&#039;ve made changes. Almost done guys! -Slade&lt;br /&gt;
&lt;br /&gt;
Awesome Steph!  Also, Awesome Corey, sounds sweet, looks good-JSlonosky&lt;br /&gt;
&lt;br /&gt;
==Potential Test Questions==&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4076</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=4076"/>
		<updated>2010-10-14T20:45:33Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems; any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates physical pages to virtual pages. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** the page must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
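The grant/map/flush operations listed above can be made concrete with a toy model of address spaces as virtual-to-physical page tables. This is a conceptual sketch of the ideas in [7], not real kernel code; all class and variable names are our own.

```python
# Toy model of L4-style address-space operations: grant, map, flush.
class AddressSpace:
    def __init__(self):
        self.pages = {}                      # virtual page -> physical page

    def grant(self, vpage, recipient, rpage):
        # page moves: removed from the owner, placed in the recipient
        recipient.pages[rpage] = self.pages.pop(vpage)

    def map(self, vpage, recipient, rpage):
        # page is shared: it stays in the owner's space too
        recipient.pages[rpage] = self.pages[vpage]

    def flush(self, vpage, recipients):
        # remove the page from every recipient that mapped it
        phys = self.pages[vpage]
        for r in recipients:
            r.pages = {v: p for v, p in r.pages.items() if p != phys}

owner, other = AddressSpace(), AddressSpace()
owner.pages[0] = "phys42"
owner.map(0, other, 5)
assert other.pages[5] == "phys42"            # shared, still owned
owner.flush(0, [other])
assert 5 not in other.pages                  # mapping withdrawn
owner.grant(0, other, 9)
assert 0 not in owner.pages                  # grant moves the page away
```

Note how flush answers the question raised below about its interaction with grant in this model: flush only withdraws *mappings* the owner handed out; a granted page has changed owner and is no longer the original owner's to flush.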
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel)[7]&lt;br /&gt;
** basic way for subsystems to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is modeled as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** the transformation of an interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSs running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor[4]&lt;br /&gt;
* responsible for virtualizing the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
*the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS sees only the resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs directly on the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through processor-level protection mechanisms[6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all privileged instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides virtualization that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
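The difference between full virtualization and paravirtualization described above can be sketched in a toy model: an unmodified guest's privileged instructions trap into the VMM to be decoded and emulated, while a paravirtualized guest knows it is virtualized and calls the hypervisor directly. This is a conceptual illustration only; the names (`VMM`, `trap_and_emulate`, `hypercall`) are ours, not from any real hypervisor API.

```python
# Toy contrast: trap-and-emulate (full virtualization) versus an
# explicit hypercall path (paravirtualization).
class VMM:
    def __init__(self):
        self.traps = 0
        self.hypercalls = 0

    def trap_and_emulate(self, instruction):
        self.traps += 1                  # unmodified guest: VMM must decode it
        return f"emulated {instruction}"

    def hypercall(self, request):
        self.hypercalls += 1             # aware guest: explicit, cheaper path
        return f"handled {request}"

vmm = VMM()
# An unmodified guest OS believes it runs in ring 0; its privileged ops trap.
vmm.trap_and_emulate("write_cr3")
# A paravirtualized guest knows it is virtualized and asks the VMM directly.
vmm.hypercall("set_page_table")
assert vmm.traps == 1 and vmm.hypercalls == 1
```

In real systems the win comes from batching and from avoiding instructions that are expensive or impossible to trap on x86, which is why Xen's paravirtualized guests must be modified.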
&lt;br /&gt;
----&lt;br /&gt;
==== Classical Virtualization ====&lt;br /&gt;
(Not complete, but covers most of article [9])&lt;br /&gt;
* VMMs allow programs in virtual environments to run natively, apart from resource usage&lt;br /&gt;
** the dominant share of instructions execute directly on the CPU&lt;br /&gt;
** the VMM completely controls system resources&lt;br /&gt;
** otherwise the VMM would need to emulate every native instruction, which would severely affect performance&lt;br /&gt;
** sensitive instructions are those that violate safety and encapsulation&lt;br /&gt;
** the VMM handles them as privileged instructions&lt;br /&gt;
&lt;br /&gt;
x86 Virtualization&lt;br /&gt;
* virtualization in personal workstations rather than mainframes&lt;br /&gt;
** rings allow isolation between virtual machines&lt;br /&gt;
** most privileged in ring 0 and least in ring 3; the operating system normally runs in ring 0 and user apps in ring 3&lt;br /&gt;
*** VMM in ring 0 and VMs in lesser-privileged rings (1 or 3)&lt;br /&gt;
*** guestOS believes it&#039;s in ring 0&lt;br /&gt;
* address space compression: where to run the VMM&lt;br /&gt;
** if run inside the guest address space, the guest can find out it is virtualized or compromise the isolation&lt;br /&gt;
* x86 does not trap all sensitive instructions but can handle them, which violates the classical virtualization description&lt;br /&gt;
* some privileged accesses fail without faulting&lt;br /&gt;
* interrupt virtualization - both the VMM and the guestOS handle interrupts&lt;br /&gt;
* binary translation - improves performance&lt;br /&gt;
* rewriting instructions and trapping before problems arise&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* the guestOS is exposed to VM information, so the guest is aware that it is virtualized and can make decisions based on this&lt;br /&gt;
* allows problem instructions to be avoided&lt;br /&gt;
* Xen&lt;br /&gt;
* the guestOS must be modified and does not run natively&lt;br /&gt;
**works with the hostOS to run efficiently&lt;br /&gt;
&lt;br /&gt;
VMM types&lt;br /&gt;
* hostedVMM - executes in the hostOS and uses the drivers and support of that OS&lt;br /&gt;
* stand-aloneVMM - runs directly on hardware and uses its own drivers and services&lt;br /&gt;
* hybridVMM - runs a serviceOS through which requests to hardware (I/O) go&lt;br /&gt;
&lt;br /&gt;
Device Emulation&lt;br /&gt;
* implements real hardware in software&lt;br /&gt;
* a completely virtual device that the guest interacts with&lt;br /&gt;
* mapped to physical hardware that handles the interactions; the emulation layer performs the conversion&lt;br /&gt;
* allows the VM to be easily migrated between machines, as it does not rely on the physical hardware&lt;br /&gt;
* allows having multiple VMs and simplifies sharing (multiplexing)&lt;br /&gt;
* poor performance, as the VMM needs to do a lot of work to virtualize the machine&lt;br /&gt;
&lt;br /&gt;
Paravirtualization&lt;br /&gt;
* modified guestOS cooperates with the VMM &lt;br /&gt;
* the VMM does not have to do everything to handle device drivers&lt;br /&gt;
* not everything can be paravirtualized&lt;br /&gt;
* proprietary OSs and device drivers can&#039;t be paravirtualized&lt;br /&gt;
* still allows an increase in performance&lt;br /&gt;
* eventing or callback mechanism&lt;br /&gt;
** guestOS modifies its interrupt mechanisms&lt;br /&gt;
* modifications are not applicable to all guestOSs&lt;br /&gt;
&lt;br /&gt;
Dedicated Devices&lt;br /&gt;
* does not virtualize the device but assigns it directly to a guest VM&lt;br /&gt;
* uses the guest&#039;s drivers instead of the host&#039;s&lt;br /&gt;
* simplifies the VMM by removing the secure handling of I/O&lt;br /&gt;
* only a limited number of physical devices can be dedicated&lt;br /&gt;
* difficult to migrate the VM, as it depends on the pairing with this resource&lt;br /&gt;
* eliminates virtualization overhead and keeps the VMM simple&lt;br /&gt;
* direct memory access not supported&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and the handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings[1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSs maximum freedom without allowing them to interfere with each other; to do this the exokernel separates protection from management, and in doing so performs 3 important tasks[1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are)[1]&lt;br /&gt;
** revoking access to resources [1]&lt;br /&gt;
* Library OS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings[1]&lt;br /&gt;
** not trusted by the exokernel, so faults are contained to the application; the example given is a bad parameter passed to the LibOS affecting only the application[1] (so the LibOS can&#039;t corrupt the kernel???)&lt;br /&gt;
** any application running on the exokernel can change its LibOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing a standard interface (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces)[1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership)[1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allows the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** use physical names whenever possible[3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed[3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** uses a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release[1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSs handle resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSs (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is checked only at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized during bind.[1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access[1]&lt;br /&gt;
** (this means that the exokernel checks authorization once; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect resources without understanding what the resource is [1]&lt;br /&gt;
*three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance[1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSs are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
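The secure-binding and revocation machinery outlined above can be sketched in a toy model: expensive authorization happens once at bind time, every later access is a cheap lookup, and the abort protocol forcibly breaks a binding when the LibOS ignores a revocation request. A conceptual illustration of [1] only; the class, method, and resource names are ours.

```python
# Toy sketch of secure bindings and the abort protocol.
class SecureBindings:
    def __init__(self, acl):
        self.acl = acl                   # resource -> set of authorized LibOSs
        self.bindings = set()            # active (libos, resource) pairs

    def bind(self, libos, resource):
        # complex authorization is checked once, at bind time
        if libos in self.acl.get(resource, set()):
            self.bindings.add((libos, resource))
            return True
        return False

    def access(self, libos, resource):
        # access-time check is cheap and needs no semantic knowledge
        return (libos, resource) in self.bindings

    def abort(self, libos, resource):
        # abort protocol: the exokernel breaks the binding itself when the
        # LibOS fails to respond to a visible revocation request
        self.bindings.discard((libos, resource))

sb = SecureBindings({"disk_block3": {"libos_A"}})
assert sb.bind("libos_A", "disk_block3")
assert not sb.bind("libos_B", "disk_block3")    # authorization fails at bind
assert sb.access("libos_A", "disk_block3")
sb.abort("libos_A", "disk_block3")
assert not sb.access("libos_A", "disk_block3")  # binding forcibly broken
```

This separation is why the kernel can protect a resource without understanding it: only the bind step needs to know what the resource means.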
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite it/reference it better later&lt;br /&gt;
&lt;br /&gt;
[9]Fisher-Ogden J. 2006. Hardware Support for Efficient Virtualization. University of California, San Diego. http://cseweb.ucsd.edu/~jfisherogden/hardwareVirt.pdf&lt;br /&gt;
&lt;br /&gt;
Not completely sure of the citation style used above.&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, but as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that an exokernel gives developers low-level access, similar to direct hardware access through a protected layer, while at the same time containing enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions for the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
MicroKernel - very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is this similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individuel assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavyweight, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise with virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &amp;quot;real&amp;quot; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah I think there were more like 7 of us. btw if anyone has any more information, feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro is chosen, and intro is done.  Microkernel and kernel will be done in a hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in a hour and we can double check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
Aaron .L&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean that exokernels take the best of both worlds.) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of most operating systems. The kernel is a bridge that lets applications access the hardware level, and it is responsible for managing the system&#039;s resources such as memory, disk storage, task management, and networking. We compare exokernels to microkernels and virtual machines by looking at how each design goes about such management. In the exokernel conceptual model, the kernel becomes much smaller than a microkernel: exokernels are tiny and strive to keep their functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it also adds a layer of complexity; a VM is less efficient than a real machine because it accesses the hardware indirectly. By contrast, the exokernel provides low-level hardware access and lets applications build custom abstractions over those devices, which improves program performance compared to a VM&#039;s implementation. The exokernel design can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system. An operating system could not function without the kernel.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system, and within a system it has the most privileges.  It runs alongside the ‘user space’, which is where the user has access and can run applications and libraries.[8]  This leaves the kernel to manage the other necessary processes; for example, the kernel manages the file systems and performs process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, which is a kernel that contains all mandatory processes within itself, was the common kernel type in earlier versions of today’s operating systems.  However, this architecture had problems. [8]  If the kernel needed to be updated with more code, or a change in the system, the entire kernel would need to be recompiled, and due to the number of processes within it this would take an inefficient amount of time.  Here, a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel: a piece of code is included in the kernel only if leaving it out would adversely affect the system. Implementing a microkernel can affect the system in several ways; for example, it can increase performance and efficiency. [7] In short, a microkernel is a kernel that keeps the amount of mandatory software within itself to a minimum, which means it contains less software to manage and has a reduced size.  &lt;br /&gt;
&lt;br /&gt;
The microkernel structure that emerged from the end of the 1980&#039;s to the early 1990&#039;s removes the file systems and the drivers from the kernel, leaving it with process control, input/output control, and interrupts. [8] This new structure makes the system much more modular and makes it easier to provide solutions.  If a driver must be patched or upgraded, the kernel does not need to be recompiled. [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows for real-time updating that can be done while the computer is still functional, which can prevent a complete crash of the system.  Therefore, if a device fails, the kernel will not crash itself like a monolithic kernel would.  The microkernel can reload the driver of the device that failed and continue functioning.  [7]  &lt;br /&gt;
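The driver-reload idea above can be sketched in miniature. This is a hedged toy model in Python, not real kernel code; the class names and behaviour are invented for illustration. The point is only that the kernel keeps nothing but dispatch, so a crashed user-space driver is replaced on the next request instead of taking the whole system down.&lt;br /&gt;

```python
# Toy sketch (not a real kernel): user-space drivers as restartable objects.
# Names (Driver, Microkernel) are illustrative, not from any real system.

class Driver:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(self.name + " crashed")
        return self.name + " handled " + request

class Microkernel:
    """Keeps only dispatch in the kernel; drivers live outside it."""
    def __init__(self):
        self.drivers = {}

    def register(self, driver):
        self.drivers[driver.name] = driver

    def request(self, name, req):
        try:
            return self.drivers[name].handle(req)
        except RuntimeError:
            # A failed driver is reloaded instead of crashing the kernel,
            # mirroring the real-time replacement described above.
            self.drivers[name] = Driver(name)
            return self.drivers[name].handle(req)

kernel = Microkernel()
kernel.register(Driver("disk"))
kernel.drivers["disk"].healthy = False         # simulate a driver fault
print(kernel.request("disk", "read block 7"))  # kernel survives, driver reloaded
```

A monolithic kernel in this model would have no `except` branch: the fault would propagate and end the whole run.&lt;br /&gt;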
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted. -Key note on the exokernel&#039;s multiplexing vs the microkernel&#039;s messaging: exo is more efficient, so perhaps run with the idea that messaging between processes is not necessarily the ideal way. We need to also start outlining weaknesses in the design in order to play up the idea that an exokernel just does it better -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
A Virtual Machine, or VM, is a software abstraction of a physical machine. This entails virtualization of the physical machine&#039;s resources in order to share them among the operating systems run in VMs. Virtualizing these resources allows an OS to run as if it were on a full machine when, in reality, it is running in a virtualized environment on top of a hostOS, sharing the machine&#039;s resources.&lt;br /&gt;
&lt;br /&gt;
Virtual Machines generally contain two key components: the Virtual Machine Monitor, or VMM, and the VM itself.&lt;br /&gt;
&lt;br /&gt;
The VMM, also known as the hypervisor, manages the virtualization of the physical resources and the interactions with the VM running on top. [4] In other words, it mediates between the virtualized world and the physical world, keeping them separate and monitoring their interactions with each other. The hypervisor is what allows the VM to operate as if it were on its own machine. This is done by handling any requests for resources and reconciling these requests with what has actually been provided to the VM by the hostOS.  The hostOS provides management for the VMM, as well as allowing physical access to devices, hardware, and drivers. [6]&lt;br /&gt;
&lt;br /&gt;
The VM is what contains the OS we are running through virtualization. [6] This OS is called the guestOS. It will only be able to access resources that have been made available to the VM by the hostOS. [6] Otherwise, the guestOS will not know about any other resources and does not have direct access to physical hardware; this is taken care of by the VMM, while the guestOS executes as its own machine, unaware of this mediator.&lt;br /&gt;
&lt;br /&gt;
There are various ways of implementing hardware virtualization in a system to allow VMs to run. These include device emulation, paravirtualization, and dedicated devices. [9]&lt;br /&gt;
&lt;br /&gt;
In device emulation, the VMM provides a complete virtualization of a device, in software, for the guestOS to interact with. [9] The VMM maps this virtualized device to the physical resource and handles any interactions between them, which usually includes converting instructions from the guestOS into instructions that are compatible with the device. [9] Device emulation allows the VM to be migrated easily to another machine, as it is not dependent on the physical devices but rather on the software emulations. [9] It also allows for simpler multiplexing between multiple virtual machines, as sharing can be handled through these virtualized devices. [9] A drawback of emulation, however, is poor performance, because the VMM must handle every request and convert it to be compatible with the physical device. [9] Nonetheless, despite its poor performance, emulation is still the most common form of virtualization.&lt;br /&gt;
&lt;br /&gt;
Paravirtualization allows for a boost in performance by having the guestOS and the hostOS work together. [9] In paravirtualization, the guestOS is not a native OS: it must be modified so that it is aware it is a virtualized system. [9] Because the guestOS is aware of this, it can make better decisions about how it accesses devices, and the VMM&#039;s responsibility is reduced since it no longer has to translate between the guestOS and the physical devices. [9] Though the performance boost is a great advantage, there are disadvantages. Paravirtualization can only be used if the modifications to the guestOS can be implemented; as well, not everything can be paravirtualized, which limits the cases in which this method can be used. [9] Also, every guestOS must be modified in order to be used in paravirtualization, and the modifications differ between OSes, so there is also the task of implementing these changes to make each guestOS compatible. [9]&lt;br /&gt;
&lt;br /&gt;
Instead of virtualizing the hardware and mediating through the VMM, dedicated devices are mapped directly to the guestOS. [9] In this method, the device uses the guestOS&#039;s drivers instead of the hostOS&#039;s. [9] This allows the guestOS to use the hardware to its full extent without having to deal with the VMM, which simplifies the VMM by eliminating the overhead of virtualizing the hardware and handling the requests to devices. [9] However, there are limited physical resources to be dedicated to a guestOS. This also makes migration difficult, as the guestOS is dependent on the physical device. [9]&lt;br /&gt;
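The approaches above trade per-request translation overhead against flexibility. As a rough illustration (a toy Python sketch; the function names and request strings are invented, not a real VMM API), an emulated path rewrites every guest request in software, while a dedicated (passthrough) device hands it straight to the hardware driver:&lt;br /&gt;

```python
# Toy model of the emulation vs dedicated-device trade-off described above.
# All names here are illustrative assumptions, not a real VMM interface.

def emulated_io(guest_request):
    # VMM intercepts every request and converts the guest instruction
    # into one compatible with the real device (the performance cost).
    translated = guest_request.replace("guest-op", "host-op")
    return ("via VMM", translated)

def passthrough_io(guest_request):
    # Dedicated device: the guestOS drivers talk to hardware directly,
    # so the VMM adds no per-request work (but migration gets harder).
    return ("direct", guest_request)

print(emulated_io("guest-op write sector 3"))
print(passthrough_io("guest-op write sector 3"))
```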
&lt;br /&gt;
&#039;&#039;&#039;notes&#039;&#039;&#039;&lt;br /&gt;
- it ended up being quite lengthy. I mainly focused on the device virtualization rather than the architecture of a VM (like x86 virtualization). I&#039;ll put up my notes for the paper I found for virtualization. I didn&#039;t talk about Xen or VMware though. If any of that is needed, I can try to continue working on it tonight but I have another priority.&lt;br /&gt;
&lt;br /&gt;
-try focusing on the emulation side of VMs: emulation&#039;s weaknesses vs the direct hardware access or custom abstraction that exokernels provide -Slade&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L &lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put in suggestions and edits. One of us should try to change it to one style if there are contradictions, and put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;br /&gt;
&lt;br /&gt;
I made some edits to the first two paragraphs. I just reworded some unclear sentences and fixed some grammatical errors. I&#039;ll work on editing more of it after COMP 3007. Also, when all the parts are up I can go through it and link the paragraphs together so it reads more like an essay.  --[[User:Aellebla|Aellebla]] 15:18, 14 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
So far so good, if you find some sentences that are off, go ahead and correct them, just note to us in here that you&#039;ve made changes. Almost done guys! -Slade&lt;br /&gt;
&lt;br /&gt;
Awesome Steph!  -JSlonosky&lt;br /&gt;
&lt;br /&gt;
==Potential Test Questions==&lt;br /&gt;
&lt;br /&gt;
Add potential test questions here:&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=User:Jslonosky&amp;diff=3810</id>
		<title>User:Jslonosky</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=User:Jslonosky&amp;diff=3810"/>
		<updated>2010-10-14T15:28:11Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: Created page with &amp;quot;The username of Jon Slonosky, computer science student at Carleton University.  Can be contacted at jslonosk@connect.carleton.ca&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The username of Jon Slonosky, computer science student at Carleton University.  Can be contacted at jslonosk@connect.carleton.ca&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3597</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3597"/>
		<updated>2010-10-14T04:15:59Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of the microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystems, and any subsystem that is used can guarantee this of all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates the physical page to the virtual page. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** the page must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the user to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
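The three address-space operations listed above can be sketched as a toy model. This is a Python illustration of the idea in [7] only; the dictionary-of-sets representation and the function names are my own, not the real interface (map is renamed share_map to avoid shadowing the Python builtin):&lt;br /&gt;

```python
# Toy model of grant/map/flush: page numbers tracked per address space.
# Sketch of the concept in [7], not real kernel code.

spaces = {"pager": {1, 2, 3}, "app": set()}

def grant(owner, recipient, page):
    # Grant: the page moves; removed from the owner, given to the recipient.
    spaces[owner].remove(page)
    spaces[recipient].add(page)

def share_map(owner, recipient, page):
    # Map: the page is shared; it stays in the owner's address space
    # and also appears in the recipient's.
    assert page in spaces[owner]
    spaces[recipient].add(page)

def flush(owner, page):
    # Flush: the page is withdrawn from every recipient's address space,
    # while the owner keeps it.
    for name in spaces:
        if name != owner:
            spaces[name].discard(page)

grant("pager", "app", 1)      # pager loses page 1, app gains it
share_map("pager", "app", 2)  # both now see page 2
flush("pager", 2)             # app loses page 2 again
print(spaces)
```

This also suggests one answer to the question above about flush and grant: a granted page changes owner, so a later flush by the old owner no longer covers it.&lt;br /&gt;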
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** Since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel IPC&lt;br /&gt;
** grant and map also need IPC (so by the principle above this has to be in the kernel) [7]&lt;br /&gt;
** basic way for subprocesses to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the message to the interrupt is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among operating systems running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** Isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware , QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** Similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it provides 3 important tasks: [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings [1]&lt;br /&gt;
** Not trusted by the exokernel, so faults stay with the application; the example given is that a bad parameter passed to the LibOS affects only the application. [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** Exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow LibOs to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic, the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** Use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSes to perform application-level resource management [1]&lt;br /&gt;
** Visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOS handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization during bind time [1]&lt;br /&gt;
** Applications with complex needs for resources are only authorized during bind. [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
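The bind-time versus access-time split described in the secure-bindings notes above can be sketched as follows. This is a toy illustration under assumed names (bind, access, the credential string are all invented); the point is only that the expensive authorization check runs once at bind time, and each access is a cheap lookup the kernel can do without understanding the resource:&lt;br /&gt;

```python
# Toy sketch of secure bindings: authorize once at bind, check cheaply on
# every access. Names are illustrative, not from the exokernel papers.

bindings = set()  # (app, resource) pairs authorized at bind time

def bind(app, resource, credentials):
    # The complex, resource-specific policy evaluation happens here, once.
    if credentials == "valid":
        bindings.add((app, resource))
        return True
    return False

def access(app, resource):
    # Cheap check: the kernel does not need to understand the resource,
    # only whether a secure binding exists for it.
    return (app, resource) in bindings

bind("libos-a", "disk-block-42", "valid")
print(access("libos-a", "disk-block-42"))  # authorized at bind time
print(access("libos-b", "disk-block-42"))  # never bound, so denied
```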
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows for LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to gauge which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* The exokernel must be careful not to delete the resource outright, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
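The visible-revocation and abort-protocol notes above fit together as one toy sketch (Python; the class and method names are invented for illustration, not from the papers): the exokernel first asks the LibOS to release a page, and only reclaims one forcibly when the LibOS does not comply.&lt;br /&gt;

```python
# Toy sketch of visible revocation with an abort-protocol fallback.
# Illustrative names only; not a real exokernel interface.

class LibOS:
    def __init__(self, pages, cooperative=True):
        self.pages = set(pages)
        self.cooperative = cooperative

    def revoke_request(self):
        # A well-behaved LibOS chooses which page it can best afford
        # to lose (here: an arbitrary one).
        if self.cooperative and self.pages:
            return self.pages.pop()
        return None  # ignores the request

def reclaim_page(libos):
    page = libos.revoke_request()        # visible revocation first
    if page is None and libos.pages:
        # Abort protocol: the LibOS failed to respond, so the exokernel
        # takes a page away by force.
        page = sorted(libos.pages)[0]
        libos.pages.discard(page)
    return page

print(reclaim_page(LibOS({5, 9})))                     # LibOS cooperates
print(reclaim_page(LibOS({5, 9}, cooperative=False)))  # forced reclaim
```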
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite/reference it better later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels,virtual machines, microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview](Power Point)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels give developers low-level access, similar to direct access, through a protected layer, while at the same time containing enough hardware abstraction to offer application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions to developer&lt;br /&gt;
Microkernel - is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine is a simulation of any or devices requested by an application program&lt;br /&gt;
Exokenel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, perfect virtual match&lt;br /&gt;
Microkernel – I’ve got sound card that plays Khazikstan sound format only&lt;br /&gt;
MicroKernel - Very small, very predictable, good for schedualing (QNX is a microkernel - POSIX compatable, benefits of running linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They share the same basic architecture, with core functionality kept in the kernel to manage everything else. As the exokernel &amp;quot;gives&amp;quot; a resource to an application, the application can use that resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machines. There is a similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us. btw, if anyone has any more information feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will auto-give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info auto pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  CLing, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level. It is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is on how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels become much smaller than microkernels since, by design, they are tiny and strive to limit their functionality to protection and multiplexing of resources. The virtual machine implementation of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity, making the system less efficient than a real machine because it accesses the hardware indirectly. It can be observed how the exokernel provides low-level hardware access and custom abstractions to those devices to improve program performance, as opposed to a VM&#039;s implementation. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
The kernel is the most important part of an operating system.  Without the kernel, an operating system could not function.  &lt;br /&gt;
&lt;br /&gt;
A kernel is the lowest-level section of an operating system.  It has the most privileges in the system.  It runs alongside the ‘user space’. It is in the user space where a user has access and can run applications and libraries.[8]  This leaves the kernel with the need to manage the other necessary processes such as the file systems and process scheduling.  The kernel is layered, with the most authoritative process on its lowest level.[8]  A monolithic kernel, a kernel that contains all mandatory processes within itself, was the common kernel type utilized by the earlier versions of today’s operating systems.  However, this architecture had problems. [8]  If the kernel needed to be updated with more code, or a fix for the system, the entire kernel would need to be recompiled, and due to the amount of processes within it, this would take an inefficient amount of time.  This is where a microkernel becomes practical.&lt;br /&gt;
&lt;br /&gt;
The concept of a microkernel is to reduce the code within the kernel; code is included in the kernel only if moving it outside would adversely affect the system, for example for performance and efficiency reasons. [7] So, a microkernel is a kernel that has a reduced amount of mandatory software within itself.  This means that it contains less software that it has to manage, and has a reduced size.  The microkernel that emerged from the end of the 1980s to the early 1990s has a structure in which processes like the file systems and the drivers are removed from the kernel, leaving it with process control, input/output control, and interrupts.  [8] This new structure makes the system much more modular, and easier to provide solutions for.  If a driver must be patched or upgraded, the kernel does not need to be recompiled.  [7] The old driver can be removed, and while the device waits for the system to recognize it, the operating system replaces the driver.  This allows real-time updating, and it can be done while the computer is still functional.  This can prevent a complete crash of the system.  If a device fails, the kernel will not crash itself, like a monolithic kernel would.  The microkernel can reload the driver of the device that failed and continue functioning.  [7]  &lt;br /&gt;
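&lt;br /&gt;
Here&#039;s a quick toy sketch of the driver-reload idea above (plain Python, just for our own understanding, not for the essay; all the names like Kernel/NetDriver are made up for illustration, this is not real kernel code):&lt;br /&gt;

```python
# Toy model of microkernel-style driver isolation: drivers live in a
# user-space registry, so one can be replaced or reloaded without
# rebuilding the "kernel" itself.

class Kernel:
    """Minimal core: only tracks which user-space driver serves each device."""
    def __init__(self):
        self.drivers = {}              # maps device name to driver object

    def register(self, device, driver):
        self.drivers[device] = driver

    def request(self, device, data):
        driver = self.drivers.get(device)
        if driver is None:
            # The device waits for a driver; the system keeps running.
            return "device waiting for driver"
        try:
            return driver.handle(data)
        except Exception:
            # A driver failure does not crash the kernel: reload it and retry.
            self.register(device, driver.__class__())
            return self.drivers[device].handle(data)

class NetDriver:
    def handle(self, data):
        return f"sent {data}"

kernel = Kernel()
kernel.register("eth0", NetDriver())
print(kernel.request("eth0", "packet"))    # normal operation
print(kernel.request("disk0", "block"))    # unknown device: no crash
```

The point is only that the kernel&#039;s table of drivers can change while the kernel keeps running, which is the real-time updating described above.&lt;br /&gt;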
&lt;br /&gt;
Want more on the scheduling?  I can do that if wanted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put in suggestions and edits. One of us should try to change it to one style if there are contradictions, and put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;br /&gt;
&lt;br /&gt;
Once the other parts are up and you see anything you know of as a good reference to back it up, put the link so we can use it. -Slade&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3586</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3586"/>
		<updated>2010-10-14T03:41:33Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
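&lt;br /&gt;
Quick toy illustration of that last bullet (Python, purely illustrative; the crossing counts are schematic, not measurements):&lt;br /&gt;

```python
# Toy comparison of user/kernel boundary crossings for one file-system call.
# In a monolithic kernel the FS lives in the kernel; in a microkernel it is
# a user-space server reached by IPC through the kernel.

def monolithic_crossings():
    # app traps into kernel (1), kernel returns to app (1)
    return 2

def microkernel_crossings():
    # app traps to kernel (1), kernel delivers IPC to the FS server (1),
    # FS server replies via the kernel (1), kernel returns to the app (1)
    return 4

print(monolithic_crossings(), microkernel_crossings())
```

So every service moved out of the kernel roughly doubles the boundary crossings for a request, which is why IPC cost dominates microkernel performance discussions.&lt;br /&gt;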
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure of one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, so each subsystem&#039;s integrity can be guaranteed independently of the others [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
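&lt;br /&gt;
The Grant/Map/Flush operations can be sketched as a toy model (plain Python over sets, just for intuition; names like sigma0/pager are made up, and a real kernel tracks far more than this):&lt;br /&gt;

```python
# Toy model of L4-style address-space operations (grant / map / flush).
# Page "frames" are just integers here.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()          # frames currently mapped in this space
        self.mapped_out = {}        # frame mapped via Map, and to which spaces

    def grant(self, frame, recipient):
        # Ownership transfer: the page leaves this space entirely.
        self.pages.remove(frame)
        recipient.pages.add(frame)

    def map(self, frame, recipient):
        # Sharing: the page stays here and also appears in the recipient.
        if frame in self.pages:
            recipient.pages.add(frame)
            self.mapped_out.setdefault(frame, set()).add(recipient)

    def flush(self, frame):
        # Recall: remove the page from every space it was mapped into.
        for space in self.mapped_out.pop(frame, set()):
            space.pages.discard(frame)

sigma0 = AddressSpace("sigma0")
sigma0.pages.add(7)
pager = AddressSpace("pager")
app = AddressSpace("app")
sigma0.grant(7, pager)      # pager now owns frame 7
pager.map(7, app)           # app shares frame 7
pager.flush(7)              # recalled from app; pager still owns it
print(pager.pages, app.pages)
```

This also suggests an answer to the Grant question above: a Granted page has left the owner entirely, so there is nothing for the old owner to Flush; Flush only recalls pages shared via Map.&lt;br /&gt;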
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** basic way for subsystems to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender ids [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
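&lt;br /&gt;
The interrupts-as-messages idea can be sketched like this (Python, purely illustrative; HardwareThread and the message format are invented names, not anything from the paper):&lt;br /&gt;

```python
# Toy sketch of "interrupts are IPC messages": the kernel only wraps a raw
# interrupt as a message; a user-level driver thread does the device-specific
# handling (including resetting the interrupt).

import queue

class HardwareThread:
    # Stand-in for a device: empty except for its unique sender id.
    def __init__(self, sender_id):
        self.sender_id = sender_id

def kernel_transform(irq_source, mailbox):
    # The only kernel involvement: turn the interrupt into a message.
    mailbox.put({"from": irq_source.sender_id, "msg": "interrupt"})

def user_level_driver(mailbox):
    # Device-specific handling happens here, outside the kernel.
    event = mailbox.get()
    return f"driver handled interrupt from {event['from']}"

mbox = queue.Queue()
kernel_transform(HardwareThread(sender_id=3), mbox)
print(user_level_driver(mbox))
```

The kernel never needs to understand what the interrupt means; it just delivers a message tagged with the sender id.&lt;br /&gt;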
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSs running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through process-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware , QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
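&lt;br /&gt;
The &amp;quot;all instructions from the VM must go through the VMM&amp;quot; point can be sketched as a toy trap-and-emulate loop (Python, purely illustrative; the instruction names are made up):&lt;br /&gt;

```python
# Toy sketch of trap-and-emulate: guest instructions reach the VMM first;
# privileged ones trap and are emulated against virtual hardware, matching
# the ring 0 (VMM) vs ring 1 (VM) split above.

PRIVILEGED = {"out_port", "set_page_table"}

class VMM:
    def __init__(self):
        self.log = []

    def dispatch(self, vm_name, instruction):
        if instruction in PRIVILEGED:
            # Trap: the VMM emulates the effect for this guest.
            self.log.append((vm_name, instruction))
            return f"emulated {instruction} for {vm_name}"
        # Unprivileged instructions could run directly on the CPU.
        return f"ran {instruction} directly"

vmm = VMM()
print(vmm.dispatch("guest1", "add"))
print(vmm.dispatch("guest1", "set_page_table"))
```

Paravirtualization then amounts to the guest calling the VMM deliberately (hypercalls) instead of being trapped, which is the guestOS/hypervisor cooperation in the last bullet.&lt;br /&gt;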
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing this it provides 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so a fault harms only the application; the example given is a bad parameter passed to the LibOS affecting only the application. [1] (So the LibOS can&#039;t interfere with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build efficient, flexible systems [1]&lt;br /&gt;
* Expose allocation [1]&lt;br /&gt;
** allows a LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names[1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSes to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex needs for resources are only authorized during bind. [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings, and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can run without the application being scheduled [2]&lt;br /&gt;
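&lt;br /&gt;
The bind-time vs access-time split of secure bindings can be sketched like this (Python toy model; the ACL table and all names are invented for illustration, not from the paper):&lt;br /&gt;

```python
# Toy sketch of exokernel secure bindings: the expensive authorization check
# happens once at bind time; later accesses only validate the binding, so
# the kernel never needs to understand the resource or the policy again.

class Exokernel:
    def __init__(self, acl):
        self.acl = acl             # resource name mapped to allowed LibOSes
        self.bindings = set()      # (libos, resource) pairs already bound

    def bind(self, libos, resource):
        # Authorization is checked here, once.
        if libos in self.acl.get(resource, set()):
            self.bindings.add((libos, resource))
            return True
        return False

    def access(self, libos, resource):
        # Fast path: just a membership test on established bindings.
        return (libos, resource) in self.bindings

exo = Exokernel({"disk_block_42": {"webserver_libos"}})
exo.bind("webserver_libos", "disk_block_42")
print(exo.access("webserver_libos", "disk_block_42"))   # True
print(exo.access("rogue_libos", "disk_block_42"))       # False
```

This is the separation of protection (the membership test) from management (who decided the ACL) that the design principles above describe.&lt;br /&gt;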
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource&#039;s contents, as the LibOS might need to write some system-critical data to the resource [1]&lt;br /&gt;
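&lt;br /&gt;
Visible revocation plus the abort fallback can be sketched as a toy (Python, just for intuition; the cooperative/unresponsive split and all names are invented):&lt;br /&gt;

```python
# Toy sketch of visible revocation with an abort fallback: the exokernel asks
# the LibOS to release a resource; if the LibOS ignores the request, the
# exokernel takes a resource by force (the abort protocol), without deleting
# its contents.

class LibOS:
    def __init__(self, cooperative):
        self.pages = {1, 2, 3}
        self.cooperative = cooperative

    def on_revoke(self):
        if self.cooperative:
            # Visible revocation: the LibOS chooses which instance to release.
            return self.pages.pop()
        return None                 # unresponsive LibOS

def revoke_one_page(libos):
    page = libos.on_revoke()        # visible: the LibOS is told first
    if page is None:
        # Abort protocol: take any page; its contents would be preserved so
        # the LibOS can still recover system-critical data later.
        page = min(libos.pages)
        libos.pages.discard(page)
    return page

nice = LibOS(cooperative=True)
stubborn = LibOS(cooperative=False)
print(revoke_one_page(nice))
print(revoke_one_page(stubborn))
```

Either way the exokernel gets a page back; the difference is only whether the LibOS got to pick which one, which is the application-level resource management the bullets describe.&lt;br /&gt;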
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I will cite/reference it properly later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to gain the benefit of hiding hardware resources from application programs.&lt;br /&gt;
Exokernel – exposes the fewest hardware abstractions to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any device requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel – POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below.&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They share the same basic architecture, with the core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; a resource to an application, the application can use that resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel developers tried to keep lots of things in user space for modularity. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient operation. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps, as with a microkernel, but virtualizing an entire operating system. This is very heavyweight, however, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and inspect all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
BTW, on my page (I guess you can call it that) I have some resources I have found. --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, though, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us, though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel, since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. BTW, if anyone has any more information, feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will automatically give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up). --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today, and for VMs he said we should focus on implementations such as Xen and VMware; he also said to talk about paravirtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry about the Microkernels section not being done yet.  Working on an outline now.  Finally found how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime and fill in the other stuff. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds.) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather that we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge that lets applications access the hardware level. It is responsible for managing the system&#039;s resources, such as memory, disk storage, task management, and networking. It is in how the kernel goes about such management that we compare exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to limit their functionality to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, since hardware is accessed indirectly. In contrast to a VM&#039;s implementation, the exokernel provides low-level hardware access and lets applications build custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and make suggestions and edits. One of us should try to unify it into one style if there are inconsistencies, and then put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3584</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3584"/>
		<updated>2010-10-14T03:40:59Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of the microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystems; any subsystem that is used can demand this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* map and flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
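The grant/map/flush ownership semantics in the notes above can be sketched as a toy model (all names here are hypothetical and just illustrate the semantics from [7]; the real primitives operate on hardware page tables, not Python sets):

```python
# Toy model of the L4-style address-space operations described above.
# Illustrative only: names and representation are invented for this sketch.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()  # pages currently mapped in this address space

    def grant(self, page, recipient):
        """Give a page away: it leaves the owner's space entirely."""
        if page not in self.pages:
            raise ValueError("owner must hold the page to grant it")
        self.pages.remove(page)
        recipient.pages.add(page)

    def map(self, page, recipient):
        """Share a page: the owner keeps it, the recipient also sees it."""
        if page not in self.pages:
            raise ValueError("owner must hold the page to map it")
        recipient.pages.add(page)

    def flush(self, page, recipients):
        """Remove the page from every recipient's space; the owner keeps it."""
        for r in recipients:
            r.pages.discard(page)

owner = AddressSpace("pager")
client = AddressSpace("client")
owner.pages.add(0x1000)
owner.map(0x1000, client)      # shared: both spaces now see the page
owner.flush(0x1000, [client])  # revoked: only the owner sees it again
```

This is how map and flush together are enough to build user-level memory managers and pagers: the manager maps pages into clients and flushes them back when it needs to reclaim them.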
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subprocesses to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is a set of threads which are empty except for their unique sender ID [7]&lt;br /&gt;
** the transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged operation is needed, it is done implicitly the next time an IPC is sent from the device [7]&lt;br /&gt;
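The "interrupts become IPC messages" idea above can be sketched as a toy model (hypothetical names; a real kernel delivers into a driver thread's message queue in kernel data structures, not a Python deque):

```python
# Toy sketch of interrupts delivered as IPC messages, per the notes above.
# All names are invented for illustration.

from collections import deque

class DriverThread:
    def __init__(self, sender_id):
        self.sender_id = sender_id  # the only thing the kernel knows about it
        self.inbox = deque()

    def wait_for_interrupt(self):
        # The user-level driver blocks on IPC; device-specific handling
        # (including resetting the device) happens here, not in the kernel.
        return self.inbox.popleft()

def kernel_deliver_interrupt(irq_table, irq_number):
    """Kernel side: transform the interrupt into an IPC message and
    deliver it without interpreting it."""
    thread = irq_table[irq_number]
    thread.inbox.append({"from": thread.sender_id, "irq": irq_number})

disk = DriverThread(sender_id="irq14")
irq_table = {14: disk}
kernel_deliver_interrupt(irq_table, 14)
msg = disk.wait_for_interrupt()
```

Note the kernel function never looks inside the message: it only routes, which is the point of keeping device knowledge out of the kernel.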
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSes; virtualization runs on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
*the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through process-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot, the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
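The layering above (every privileged instruction from a VM traps to the VMM, which either emulates it or forwards device access to the hostOS's drivers) can be sketched as a toy model; all names here are hypothetical, and real trapping happens in hardware protection rings, not Python:

```python
# Toy sketch of Type I layering with hostOS-provided drivers, per the
# notes above. Purely illustrative names and strings.

class HostOS:
    """Provides the device drivers that the small VMM lacks."""
    def handle(self, instruction):
        return "host driver handled " + instruction

class VMM:
    """Runs in ring 0; mediates every privileged instruction from a VM."""
    def __init__(self, host_os):
        self.host = host_os

    def trap(self, vm_name, instruction):
        if instruction.startswith("io:"):
            # The VMM has no drivers of its own: forward to the hostOS.
            return self.host.handle(instruction)
        # Otherwise emulate the privileged instruction for the guest.
        return "emulated " + instruction + " for " + vm_name

vmm = VMM(HostOS())
io_result = vmm.trap("guest1", "io:disk_read")
priv_result = vmm.trap("guest1", "set_page_table")
```

The ring 0 / ring 1 split in the notes is what forces the `trap` call: the guest cannot execute privileged instructions directly, so the VMM sees all of them.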
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: just security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes/is in control&lt;br /&gt;
* Keeps a basic kernel to handle allocating and sharing resources rather than having applications developed straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** Reduces the number of kernel crossings [1]&lt;br /&gt;
** Not trusted by the exokernel, so errors are contained: the example given is a bad parameter passed to the LibOS, where only the application is affected. [1] (So the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** Any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** Applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** A LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation[1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose Names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** Physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose Revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* Exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write, and so on)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
* Hardware Mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
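The bind-time vs access-time split described above can be sketched as a toy model (hypothetical names; real exokernels implement secure bindings with hardware mechanisms such as TLB entries, software caching, or downloaded code, not Python sets):

```python
# Toy sketch of secure bindings: expensive authorization once at bind
# time, then a cheap check at every access. Names are illustrative.

class Exokernel:
    def __init__(self, acl):
        self.acl = acl        # which app may bind which resource
        self.bindings = set() # (app, resource) pairs already bound

    def bind(self, app, resource):
        # The expensive authorization check happens once, at bind time.
        if resource not in self.acl.get(app, ()):
            raise PermissionError(app + " may not bind " + resource)
        self.bindings.add((app, resource))

    def access(self, app, resource):
        # The access-time check is a cheap lookup; the kernel needs no
        # understanding of what the resource actually is.
        return (app, resource) in self.bindings

exo = Exokernel(acl={"libos_a": {"disk_block_7"}})
exo.bind("libos_a", "disk_block_7")
```

The point of the split is that `access` stays resource-agnostic: protection is enforced without the kernel managing or interpreting the resource.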
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to gauge which resources are scarce [1]&lt;br /&gt;
* Slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* The exokernel must be careful not to delete the resource outright, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
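Visible revocation with the abort protocol as fallback, per the notes above, can be sketched as a toy model (all names are hypothetical and invented for this sketch):

```python
# Toy sketch of visible revocation backed by the abort protocol.
# Illustrative only; the real protocol operates on hardware resources.

class LibOS:
    def __init__(self, responsive=True):
        self.resources = set()
        self.responsive = responsive

    def revoke_request(self, resource):
        """Visible revocation: the LibOS chooses to release (and could
        first write out any critical state it needs to keep)."""
        if self.responsive:
            self.resources.discard(resource)
            return True
        return False  # the LibOS ignored the request

def revoke(libos, resource):
    if libos.revoke_request(resource):
        return "released voluntarily"
    # Abort protocol: the exokernel takes the resource back by force.
    libos.resources.discard(resource)
    return "aborted"

good = LibOS(responsive=True)
good.resources.add("page42")
bad = LibOS(responsive=False)
bad.resources.add("page42")
```

The visible path is slower (the application must run and respond) but lets well-behaved LibOSes decide what to give up; the abort path is the forced fallback.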
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware, which would create a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[8]&amp;lt;nowiki&amp;gt;Microkernel versus monolithic kernel&lt;br /&gt;
http://www.vmars.tuwien.ac.at/courses/akti12/journal/04ss/article_04ss_Roch.pdf  - Roch&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
I will cite/reference it properly later&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to gain the benefit of hiding hardware resources from application programs.&lt;br /&gt;
Exokernel – exposes the fewest hardware abstractions to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any device requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel – POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They share the same basic architecture, with the core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; a resource to an application, the application can use that resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if each is running on its own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for modularity. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &amp;quot;real&amp;quot; hardware. If we look at a VirtualBox setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. BTW, if anyone has any more information feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will auto-generate the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry about the Microkernels not being done yet.  Working on an outline now.  Finally found out how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (Whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry, I&#039;m late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work, but I don&#039;t see any work on the final essay done. I would love to help; I just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware level; it is responsible for managing the system&#039;s resources such as memory, disk storage, task management and networking. It is in how the kernel goes about such management that we compare exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are even smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as the hardware is accessed indirectly. In contrast to a VM&#039;s implementation, the exokernel provides low-level hardware access and lets applications build custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put in suggestions and edits. One of us should try to change it to one style if there are contradictions. And then put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3582</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3582"/>
		<updated>2010-10-14T03:24:21Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* one failure of a program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system. [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem that is used can expect this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates the physical page to the virtual page. [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based off the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s. [7]&lt;br /&gt;
*** must be available to the owner. [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space. [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
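The Grant/Map/Flush operations above can be pictured with a toy sketch (plain Python, purely illustrative; the class and variable names are ours, not from [7], and real flush is recursive through sub-mappings):&lt;br /&gt;

```python
# Toy model of the three address-space operations from Liedtke [7].
# Each address space holds a set of pages; 'map' shares a page,
# 'grant' transfers it, and 'flush' recalls it from every recipient.
# One level of sharing only; not real kernel code.

class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()        # pages this space can access
        self.shared_out = {}      # page -> set of recipient spaces

    def grant(self, page, recipient):
        # Transfer: the page leaves the owner and enters the recipient.
        self.pages.remove(page)
        recipient.pages.add(page)

    def map(self, page, recipient):
        # Share: the recipient gains access, the owner keeps the page.
        assert page in self.pages
        recipient.pages.add(page)
        self.shared_out.setdefault(page, set()).add(recipient)

    def flush(self, page):
        # Recall: remove the page from every recipient address space.
        for r in self.shared_out.pop(page, set()):
            r.pages.discard(page)
```

A user-level pager, for example, would map a page into a client and later flush it to reclaim the frame.&lt;br /&gt;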
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** grant and map also need IPC (so by the principle above this has to be in the kernel) [7]&lt;br /&gt;
** basic way for subsystem processes to communicate. [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt. [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
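The interrupt handling above can be sketched as a toy simulation (plain Python, illustrative only; the `Kernel`/`driver_loop` names are ours, not from [7]):&lt;br /&gt;

```python
# Toy sketch of "interrupts as IPC" [7]: the kernel only turns a
# hardware interrupt into an empty message tagged with the sender
# id and delivers it to the registered driver thread's queue.
# Device-specific handling happens entirely at user level.

from collections import deque

class Kernel:
    def __init__(self):
        self.queues = {}                     # irq -> driver message queue

    def register(self, irq, queue):
        self.queues[irq] = queue

    def hardware_interrupt(self, irq):
        # The kernel does not interpret the interrupt;
        # it just delivers a message identifying the sender.
        self.queues[irq].append({"sender": irq})

def driver_loop(queue):
    # User-level driver thread: receive messages and handle them.
    handled = []
    while queue:
        msg = queue.popleft()
        handled.append(msg["sender"])        # device-specific work here
    return handled
```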
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSes running on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through processor-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all privileged instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
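The difference between full virtualization and paravirtualization can be contrasted with a toy sketch (plain Python, illustrative only; the class names and strings are ours, not from Xen or VMware):&lt;br /&gt;

```python
# Toy contrast: in full virtualization an unmodified guest runs as-is
# and every privileged instruction traps into the VMM, which must
# decode and emulate it. In paravirtualization the guest is modified
# to call the hypervisor directly (a "hypercall"), cooperating with
# it instead of being trapped. Purely illustrative.

class VMM:
    def __init__(self):
        self.traps = 0
        self.hypercalls = 0

    def trap(self, instruction):
        # Full virtualization path: decode and emulate the instruction.
        self.traps += 1
        return "emulated " + instruction

    def hypercall(self, service):
        # Paravirtualization path: the guest names the service it wants.
        self.hypercalls += 1
        return "performed " + service

class UnmodifiedGuest:
    def do_privileged(self, vmm):
        return vmm.trap("write cr3")

class ParavirtualGuest:
    def do_privileged(self, vmm):
        return vmm.hypercall("switch page table")
```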
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Micro-kernel architecture with limited abstractions, ask for resource, get resource not resource abstraction&lt;br /&gt;
* Less functionality provided by kernel, security and handling of resource sharing&lt;br /&gt;
* Once application receives resource, it can use it as it wishes/in control&lt;br /&gt;
* Keep the basic kernel to handle allocating resources and sharing rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it must be trusted by the application; the example given is that a bad parameter passed to the LibOS only affects the application [1] (So the LibOS can&#039;t interact with the kernel ???)&lt;br /&gt;
** any application running on the exokernel can change the LibOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely expose hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation [1]&lt;br /&gt;
** allow LibOSes to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called)--[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSes to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* only checks authorization at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time. [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts)--[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
*Three ways to implement:&lt;br /&gt;
* Hardware mechanisms [1]&lt;br /&gt;
* Software caching [1]&lt;br /&gt;
* Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
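The bind-time/access-time split described above can be illustrated with a toy sketch (plain Python; the `Exokernel`/`acl` names and the string resource names are ours, invented for illustration, not from [1]):&lt;br /&gt;

```python
# Toy model of exokernel "secure bindings" [1]: the potentially
# expensive authorization decision happens once, at bind time.
# Later accesses only do a cheap lookup of the existing binding,
# so the kernel never needs to understand the resource itself.
# Purely illustrative, not real kernel code.

class Exokernel:
    def __init__(self, acl):
        self.acl = acl            # resource -> set of authorized apps
        self.bindings = set()     # (app, resource) capabilities

    def bind(self, app, resource):
        # Bind time: evaluate the (complex) authorization policy once.
        if app not in self.acl.get(resource, set()):
            raise PermissionError("bind refused")
        self.bindings.add((app, resource))

    def access(self, app, resource):
        # Access time: cheap check of the previously established binding.
        return (app, resource) in self.bindings
```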
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to just delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, just as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: they give developers low-level access (similar to direct access, but through a protected layer) while still providing enough hardware abstraction to hide the hardware resources from application programs.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any device requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel - Very small, very predictable, good for scheduling (QNX is a microkernel - POSIX compatible, benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware, without abstraction. They share the same basic architecture, with the core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; a resource to an application, the application can use that resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if each is running on its own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep as much as possible in user space. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient operation. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
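The costly back-and-forth described above can be made concrete by counting protection-domain crossings for one service request (a toy count that models the structure of each design, not measured performance):&lt;br /&gt;

```python
# Toy count of protection-domain crossings for one "read from network"
# request. The numbers model the shape of each design, not real costs.

def monolithic_crossings():
    # app to kernel (syscall), then kernel back to app (return)
    return 2

def microkernel_crossings():
    # app to kernel (IPC send), kernel to network server (deliver),
    # server to kernel (reply), kernel back to app (deliver reply)
    return 4

print("monolithic:", monolithic_crossings())
print("microkernel:", microkernel_crossings())
```

Even in this idealized count, moving the service into a user-space process doubles the crossings per request, which is the inefficiency the paragraph above describes.&lt;br /&gt;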
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw, on my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though... Oh well. Their loss.  I will do some before or after work today. I&#039;ll start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah, i think there were more like 7 of us. btw, if anyone has any more information feel free to add it. It would be nice if you add the references so that citing is really easy: on acm.org it will auto-generate the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up automatically) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today, and for VM he said we should focus on implementations such as Xen and VMware; he also said to talk about para-virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry the Microkernels section is not done yet.  Working on an outline now.  Finally found out how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  CLing, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
what do you mean by &amp;quot;weaker&amp;quot;? (i think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based off an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware; it is responsible for managing the system&#039;s resources, such as memory, disk storage, task management and networking. It is on how the kernel goes about such management that we are comparing exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine implementation of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. By contrast, the exokernel provides low-level hardware access and lets applications build custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet.  Looks like we got it covered.  We should read each other&#039;s parts and put in suggestions and edits. One of us should change it to one style if there are contradictions, and put it on the main page.  We can figure that out tomorrow.  - Jon S&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3581</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3581"/>
		<updated>2010-10-14T03:21:55Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to Kernel to user space and back again, this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of the microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure of one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem can rely on this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates physical pages to virtual pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
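The Grant/Map/Flush operations above can be sketched as a toy page-mapping model (an illustrative sketch of the idea in [7] only, not L4&#039;s real interface; the names are invented):&lt;br /&gt;

```python
# Toy model of the three address-space operations from the outline above.
# address_spaces maps a subsystem name to the set of pages it holds.

address_spaces = {"owner": {"page1", "page2"}, "pager": set(), "client": set()}

def grant(src, dst, page):
    # Grant: the page is removed from the owner and put in the recipient.
    address_spaces[src].remove(page)   # page must be available to the owner
    address_spaces[dst].add(page)

def map_page(src, dst, page):
    # Map: the page is shared; it stays in the owner's space as well.
    assert page in address_spaces[src]
    address_spaces[dst].add(page)

def flush(src, page):
    # Flush: the page is removed from every recipient's address space.
    for name in address_spaces:
        if name != src:
            address_spaces[name].discard(page)

grant("owner", "pager", "page1")      # page1 now only in pager's space
map_page("owner", "client", "page2")  # page2 shared by owner and client
flush("owner", "page2")               # page2 revoked from the client
print(address_spaces)
```

This also suggests an answer to the Grant question above: once a page has been granted away, the original owner no longer holds it, so only the new owner can flush it onward.&lt;br /&gt;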
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subprocesses to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender ids [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupt handling and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is issued implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
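The interrupts-as-messages idea above can be sketched as a toy queue (illustrative only; the function names are invented, and real L4 delivers interrupts as IPC rather than through a Python deque):&lt;br /&gt;

```python
from collections import deque

# Toy sketch: the kernel turns a hardware interrupt into an IPC message
# carrying only a sender id; a user-level driver does the device-specific
# handling, so the kernel never needs to understand the interrupt.

ipc_queue = deque()

def kernel_on_interrupt(irq):
    # Kernel-side transformation: the interrupt becomes an empty message
    # whose only content is its unique sender id.
    ipc_queue.append({"sender": f"irq{irq}"})

def user_level_driver():
    # Device-specific work happens outside the kernel, at user level.
    msg = ipc_queue.popleft()
    return f"driver handled message from {msg['sender']}"

kernel_on_interrupt(3)
print(user_level_driver())
```

The design choice this mirrors is the one in the bullets above: the kernel's job ends at turning the interrupt into a message; everything device-specific lives in a user-level thread.&lt;br /&gt;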
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OS virtualization running on top of host OS&lt;br /&gt;
* Virtualized OS believe running on full machine on its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the host OS [6]&lt;br /&gt;
* the host OS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM is running is called the guest OS [6]&lt;br /&gt;
* the guest OS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guest OS from the hardware is done through privilege-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the host OS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guest OS and hypervisor work together to improve performance&lt;br /&gt;
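The three approaches above differ mainly in what sits between the guest and the hardware. A toy dispatch sketch of the path a guest&#039;s hardware access takes in each approach (structure only; the function names are invented and real hypervisors do not work via string lists):&lt;br /&gt;

```python
# Toy sketch of each virtualization approach's path to the hardware.

def type1_path():
    # VMM runs on the bare hardware (the Xen model from the outline).
    return ["guest OS", "VMM, ring 0", "hardware"]

def type2_path():
    # VMM runs as a process on a host OS (the VMware/QEMU model).
    return ["guest OS", "VMM", "host OS", "hardware"]

def paravirt_path():
    # Guest cooperates with the hypervisor; drivers live in the host OS.
    return ["modified guest OS", "hypervisor", "host OS drivers", "hardware"]

for path in (type1_path(), type2_path(), paravirt_path()):
    print(" | ".join(path))
```

Read downward, each extra hop is a layer of indirection; para-virtualization keeps the Type I shape but shortens the work done per hop by letting the guest cooperate with the hypervisor.&lt;br /&gt;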
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection for mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this, the exokernel separates protection from management; in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it only needs to be trusted by its application; the example given is that a bad parameter passed to the LibOS affects only the application [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely expose hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-based resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
*Expose allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* the LibOS handles resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** they enforce this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex needs for resources are only authorized during bind [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resource without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
** hardware mechanisms [1]&lt;br /&gt;
** software caching [1]&lt;br /&gt;
** downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code into the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and to improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
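The bind-time versus access-time split above can be sketched as a toy check (illustrative of the idea in [1] only; the function names are invented):&lt;br /&gt;

```python
# Toy sketch of a secure binding: the expensive authorization decision is
# made once at bind time; every later access is only a cheap table lookup.

bindings = set()

def bind(app, resource, authorized):
    # Bind time: the kernel evaluates the possibly complex policy once.
    if not authorized:
        raise PermissionError("authorization refused at bind time")
    bindings.add((app, resource))

def access(app, resource):
    # Access time: no policy reasoning, just "was a binding established?"
    if (app, resource) not in bindings:
        raise PermissionError("no secure binding")
    return f"{app} used {resource}"

bind("libos1", "disk block 7", authorized=True)
print(access("libos1", "disk block 7"))
```

This is why the kernel can protect a resource without understanding it: at access time it only consults the binding table, never the semantics of the resource.&lt;br /&gt;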
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to a revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
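Visible revocation and the abort protocol fit together as a two-step escalation, which can be sketched as a toy (illustrative of the idea in [1]; the names are invented):&lt;br /&gt;

```python
# Toy sketch of visible revocation with an abort fallback: the exokernel
# first asks the LibOS to release a resource; only if the LibOS fails to
# respond is the resource taken away by force.

def revoke(resource, libos_responds):
    # Visible revocation: tell the LibOS, letting it save state and
    # choose which instance of the resource to release.
    if libos_responds:
        return f"{resource} released cooperatively"
    # Abort protocol: the LibOS failed to respond, so break the binding.
    return f"{resource} repossessed by exokernel"

print(revoke("page 12", libos_responds=True))
print(revoke("page 12", libos_responds=False))
```

The cooperative path is slower but lets the LibOS flush any critical data first; the abort path is the safety net that keeps a misbehaving LibOS from holding a resource forever.&lt;br /&gt;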
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source but an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel-&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels: exokernels give developers low-level access, similar to direct access through a protected layer, while at the same time containing enough hardware abstraction to offer application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – exposes the fewest hardware abstractions to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any or all devices requested by an application program&lt;br /&gt;
Exokernel – I’ve got a sound card&lt;br /&gt;
Virtual Machine – I’ve got the sound card you’re looking for, perfect virtual match&lt;br /&gt;
Microkernel – I’ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel - very small, very predictable, good for scheduling (QNX is a microkernel – POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction. They share the same basic architecture, with core functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; a resource to an application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep as much as possible in user space. But one major problem with this is the large amount of moving from a process to the kernel to user space and back again, which is a costly, inefficient operation. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information, because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
By the way, on my page (I guess you can call it that) I have some resources I have found --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure, that sounds good. There should be 5 or 6 of us though... Oh well, their loss. I will do some before or after work today. I&#039;ll start with the microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. By the way, if anyone has any more information, feel free to add it. It would be nice if you add the references, so that citing is really easy; on acm.org it will automatically give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words. Sorry about the microkernels not being done yet. Working on an outline now. Finally found how to access the ACM through Carleton. Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management. Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry, late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work, but I don&#039;t see any work on the final essay done. I would love to help; I just need to know where I can step in so as not to screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man. I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done. Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything; the issue is I don&#039;t have any context on what to write. How do I tie it into the rest of the essay? I only have a Japanese quiz tomorrow morning, then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about the exokernel, or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself. Slade is getting off work in an hour and we can double-check what he is doing then. We can put it together tomorrow sometime, and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about. Get something down, and we can go over it. Cling, just write what you think. There is not a lot to go over if I write kernel/microkernel well enough. What is an exokernel? The exokernel is an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add them here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
In computer science, the kernel is the component at the center of the majority of operating systems. The kernel is a bridge for applications to access the hardware; it is responsible for managing the system&#039;s resources such as memory, disk storage, task management, and networking. It is on how the kernel goes about such management that we compare exokernels to microkernels and virtual machines. In the exokernel conceptual model, exokernels are much smaller than microkernels since, by design, they are tiny and strive to keep functionality limited to protection and multiplexing of resources. The virtual machine approach of virtualizing all devices on the system may provide compatibility, but it adds a layer of complexity that makes the system less efficient than a real machine, as it accesses the hardware indirectly. In contrast to a VM&#039;s implementation, the exokernel provides low-level hardware access and lets applications build custom abstractions over those devices to improve program performance. The exokernel concept has a design that can take the better concepts of microkernels and virtual machines, to the extent that exokernels can be seen as a compromise between a virtual machine and a microkernel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise --[[User:Asoknack|Asoknack]]&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;br /&gt;
&lt;br /&gt;
Sweet. Looks like we got it covered. We should read each other&#039;s parts and add suggestions and edits. One of us should try to unify everything into one style if there are inconsistencies, and put it on the main page. We can figure that out tomorrow. - Jon S&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3556</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3556"/>
		<updated>2010-10-14T02:37:18Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in the kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security; corruption in one does not necessarily cause failure in the system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
&#039;&#039;&#039;Advantages of microkernels&#039;&#039;&#039;&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure of one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem in use is guaranteed this independence by all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates physical pages to virtual pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
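&lt;br /&gt;
The Grant/Map/Flush operations above can be sketched as a toy model; this is an illustrative sketch only (plain Python, not real kernel code; the class and method names are our own, not from [7]):&lt;br /&gt;

```python
# Toy model of the address-space operations described above.
class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()       # pages this space can access
        self.mapped_to = {}      # page -> set of recipient spaces

    def grant(self, page, recipient):
        """Give a page away: it leaves this space and enters the recipient space."""
        if page not in self.pages:
            raise PermissionError("page must be available to the owner")
        self.pages.remove(page)
        recipient.pages.add(page)

    def map(self, page, recipient):
        """Share a page: the recipient gains access, the owner keeps it."""
        if page not in self.pages:
            raise PermissionError("page must be available to the owner")
        recipient.pages.add(page)
        self.mapped_to.setdefault(page, set()).add(recipient)

    def flush(self, page):
        """Revoke a shared page from every recipient; the owner keeps it."""
        for recipient in self.mapped_to.pop(page, set()):
            recipient.pages.discard(page)

pager = AddressSpace("pager")
pager.pages.add("p0")
app = AddressSpace("app")
pager.map("p0", app)     # shared: both spaces can access p0
pager.flush("p0")        # revoked: only the pager can access p0 again
```

In a real microkernel these operations manipulate MMU page mappings; the sets here only mimic the bookkeeping, which is enough to see how memory managers and pagers can be built on Map and Flush outside the kernel.&lt;br /&gt;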
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, IPC has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subprocesses to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
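&lt;br /&gt;
The interrupt handling described above can be mocked up in a few lines; this is a schematic sketch with invented names (the kernel only turns an interrupt into an IPC message for a user-level driver thread):&lt;br /&gt;

```python
from collections import deque

class DriverThread:
    """User-level driver: receives interrupt messages over IPC."""
    def __init__(self, sender_id):
        self.sender_id = sender_id   # the unique sender id of the hardware
        self.mailbox = deque()
        self.handled = []

    def run(self):
        while self.mailbox:
            msg = self.mailbox.popleft()
            # device-specific handling (and resetting the interrupt)
            # happens here, at user level, not in the kernel
            self.handled.append(msg)

def kernel_interrupt(drivers, irq):
    """The whole job of the kernel: transform the interrupt into a message."""
    driver = drivers[irq]
    driver.mailbox.append({"from": driver.sender_id, "irq": irq})

disk = DriverThread(sender_id="hw-disk")
drivers = {14: disk}             # irq 14 -> disk driver (made-up mapping)
kernel_interrupt(drivers, 14)    # hardware raises irq 14
disk.run()                       # user-level driver handles it
```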
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSs; the virtualization runs on top of a host OS&lt;br /&gt;
* Each virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM is using for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
* Type I virtualization [5]&lt;br /&gt;
** runs directly off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through a process-level protection mechanism [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all privileged instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** ex. Xen [4]&lt;br /&gt;
* Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** ex. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I but uses the hostOS for device driver access [6]&lt;br /&gt;
** Provide a virtualization that is similar to hardware [From the paper posted, no citation yet]&lt;br /&gt;
** GuestOS and Hypervisor work together to improve performance&lt;br /&gt;
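&lt;br /&gt;
The ring-0/ring-1 split above can be illustrated with a toy trap-and-emulate loop; this is schematic only, and the instruction names and classes are invented for illustration, not taken from Xen or VMware:&lt;br /&gt;

```python
# Toy illustration of Type I virtualization: privileged instructions from a
# guest cannot touch the hardware and are trapped into the VMM instead.
PRIVILEGED = {"out_port", "set_page_table", "halt"}

class VMM:
    """Mediates privileged operations for its guests (runs in ring 0)."""
    def __init__(self):
        self.log = []

    def trap(self, guest, instr):
        # In real hardware the CPU raises a trap; here the guest calls us.
        self.log.append((guest.name, instr))
        if instr == "halt":
            guest.running = False    # emulate: stop only that guest
        # other privileged instructions would be emulated against
        # virtual hardware here

class Guest:
    """A guestOS running in a less privileged ring (ring 1)."""
    def __init__(self, name, vmm):
        self.name, self.vmm, self.running = name, vmm, True

    def execute(self, instr):
        if instr in PRIVILEGED:
            self.vmm.trap(self, instr)   # must go through the VMM
        # unprivileged instructions run natively at full speed

vmm = VMM()
g1, g2 = Guest("linux", vmm), Guest("bsd", vmm)
g1.execute("add")     # unprivileged: no VMM involvement
g1.execute("halt")    # privileged: trapped and emulated by the VMM
```

Para-virtualization changes this picture by having the guestOS call the hypervisor explicitly (hypercalls) instead of relying on traps.&lt;br /&gt;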
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / it is in control&lt;br /&gt;
* Keep a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other; to do this the exokernel separates protection from management, and in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, yet can be trusted by the application; the example given is that a bad parameter passed to the LibOS affects only the application [1] (so a LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change its LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely Expose Hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build efficient, flexible systems [1]&lt;br /&gt;
* Expose allocation [1]&lt;br /&gt;
** allows a LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
* Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
* Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows well-behaved LibOSes to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* the LibOS handles resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write, and so on)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* Used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* Allows the separation of protection and resource use [1]&lt;br /&gt;
* checks authorization only at bind time [1]&lt;br /&gt;
** applications with complex needs for resources are only authorized during bind [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
** Hardware mechanisms [1]&lt;br /&gt;
** Software caching [1]&lt;br /&gt;
** Downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings, and improves performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
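&lt;br /&gt;
The bind-time vs. access-time split above can be sketched as follows; this is an illustrative toy only (the names Exokernel, bind, and access are ours, not the API from [1]):&lt;br /&gt;

```python
# Toy sketch of secure bindings: the expensive authorization check happens
# once at bind time; each later access needs only a cheap lookup.
class Exokernel:
    def __init__(self, acl):
        self.acl = acl           # resource -> set of apps allowed to bind
        self.bindings = set()    # (app, resource) pairs bound so far

    def bind(self, app, resource):
        # The semantic, potentially complex check happens only here.
        if app not in self.acl.get(resource, set()):
            raise PermissionError(app + " may not bind " + resource)
        self.bindings.add((app, resource))

    def access(self, app, resource):
        # The kernel needs no understanding of the resource here; it only
        # checks that a binding exists (in practice e.g. a TLB entry).
        return (app, resource) in self.bindings

ek = Exokernel({"disk-block-7": {"libos-a"}})
ek.bind("libos-a", "disk-block-7")
ek.access("libos-a", "disk-block-7")   # fast path: True
ek.access("libos-b", "disk-block-7")   # never bound: False
```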
==== Visible Resource Revocation ====&lt;br /&gt;
* Used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to simply delete the resource, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
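&lt;br /&gt;
The visible-revocation/abort sequence above amounts to &amp;quot;ask first, seize on refusal&amp;quot;; here is a toy sketch with invented names, not the protocol as specified in [1]:&lt;br /&gt;

```python
# Toy sketch of visible revocation with an abort fallback: the exokernel
# asks the LibOS to give the resource back, and reclaims it by force only
# if the LibOS does not comply.
class LibOS:
    def __init__(self, cooperative=True):
        self.resources = set()
        self.cooperative = cooperative

    def please_release(self, resource):
        """Visible revocation request: a well-behaved LibOS releases the
        resource itself (and could first save any critical state)."""
        if self.cooperative:
            self.resources.discard(resource)
            return True
        return False   # a misbehaving LibOS ignores the request

def revoke(libos, resource):
    if libos.please_release(resource):
        return "released"          # visible revocation succeeded
    libos.resources.discard(resource)
    return "aborted"               # abort protocol: taken by force

good, bad = LibOS(cooperative=True), LibOS(cooperative=False)
good.resources.add("page-3")
bad.resources.add("page-9")
revoke(good, "page-3")   # cooperative LibOS releases it itself
revoke(bad, "page-9")    # unresponsive LibOS loses it via abort
```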
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to the hardware, which would create a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, but as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access, through a protected layer, and at the same time can contain enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of any or all devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefit of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture with the basic functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application it can use the resource in isolation of other applications (until forced to shared) much like VMs receive their resources, either partitioned or virtualized, and execute as if its running on its own machine. There is this similar notion of partitioning the resources among applications/OS and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individuel assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the micro kernel was to take everything they could out of the Kernel and put it into a process. For ex, networking would be put into a process instead of staying in the kernel. The micro kernel dev&#039;s tried to keep lots of things in user space for efficiency. But one major problem with this is there would be a large amount of moving from a process to the kernel to user space and back again and this is a costly, non efficient process.It was an application specific OS, there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System. This is very heavy however but the benefits are that it‟s easy and all the standard OS features are there whereas in a microkernel setup they would not all be there and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise to virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something you hide a lot of the actual information because you wouldn‟t be able to see the „real‟ hardware. If we look at a virtual box setup running Linux, and we go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. Ill start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
yeah i think there was more like 7 of us btw if any one has any more information feel free to add it would be nice if you add the references so that way citing is really easy on  acm.org it will auto give you the citation info (where it says Display Formats click on ACM Ref  and new window with the citation info auto pop&#039;s up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down, and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
Steph Lay&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually breakdown the essay into components then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean that exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part, please. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -Steph L.&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Paragraph 4 - Contrast/Compromise&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3482</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3482"/>
		<updated>2010-10-14T00:49:24Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* The Essay */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* A large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem created must be independent of all other subsystems, and any subsystem in use can rely on this guarantee from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates physical pages to virtual pages [7]&lt;br /&gt;
* processor-specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides 3 operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at kernel level [7]&lt;br /&gt;
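The three address-space operations above can be sketched as a toy model; every name here (AddressSpace, grant, map_page, flush) is illustrative, not the actual microkernel interface described in [7]:

```python
# Toy model of the Grant / Map / Flush operations described above.
class AddressSpace:
    def __init__(self, name):
        self.name = name
        self.pages = set()            # pages currently mapped in this space

    def grant(self, page, recipient):
        # Grant: a transfer -- the page leaves the granter's space entirely
        assert page in self.pages, "granter must own the page"
        self.pages.remove(page)
        recipient.pages.add(page)

    def map_page(self, page, recipient):
        # Map: a share -- both spaces can access the page afterwards
        assert page in self.pages, "mapper must own the page"
        recipient.pages.add(page)

    def flush(self, page, recipients):
        # Flush: revoke the page from every recipient; the owner keeps it
        for r in recipients:
            r.pages.discard(page)

pager = AddressSpace("pager")
app = AddressSpace("app")
pager.pages.add(0x1000)
pager.map_page(0x1000, app)       # shared: both spaces now hold the page
pager.flush(0x1000, [app])        # revoked from app; pager still owns it
```

This also makes the Flush-vs-Grant question above concrete: Grant transfers ownership, Map shares, and Flush undoes a Map without touching the owner's copy.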
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so, by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subsystem processes to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is treated as a set of threads whose messages are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupt handling and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
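A minimal sketch of the interrupt handling described above; the names (Kernel, register_driver, raise_interrupt, receive) are invented for illustration and are not the interface from [7]:

```python
from collections import deque

# The kernel only turns a hardware interrupt into an ordinary IPC message
# whose sole content is a unique sender id; everything device-specific
# happens in the user-level driver thread that receives the message.
class Kernel:
    def __init__(self):
        self.mailboxes = {}        # maps driver thread id to its message queue
        self.irq_to_thread = {}    # maps interrupt line to driver thread id

    def register_driver(self, irq, thread_id):
        self.mailboxes[thread_id] = deque()
        self.irq_to_thread[irq] = thread_id

    def raise_interrupt(self, irq):
        # kernel part: the interrupt becomes an empty message carrying
        # only the sender id of the pseudo "hardware thread"
        tid = self.irq_to_thread[irq]
        self.mailboxes[tid].append({"sender": "irq-%d" % irq})

    def receive(self, thread_id):
        # driver part, at user level: handle (and reset) the device here
        return self.mailboxes[thread_id].popleft()

k = Kernel()
k.register_driver(irq=4, thread_id="serial-driver")
k.raise_interrupt(4)
msg = k.receive("serial-driver")   # {'sender': 'irq-4'}
```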
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among guest OSes running on top of a host OS&lt;br /&gt;
* The virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of the hardware (mapping physical to virtual) and for the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the host OS [6]&lt;br /&gt;
* the host OS provides login and physical access to the hardware, as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guest OS [6]&lt;br /&gt;
* the guest OS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
* Type I virtualization [5]&lt;br /&gt;
** runs directly off the physical hardware [4]&lt;br /&gt;
** isolation of the guest OS from the hardware is done through processor-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, scheduling is done by the VMM [6]&lt;br /&gt;
** on boot, the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** e.g. Xen [4]&lt;br /&gt;
* Type II virtualization [5]&lt;br /&gt;
** runs off the host OS [4]&lt;br /&gt;
** e.g. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I, but uses the host OS for device driver access [6]&lt;br /&gt;
** provides a virtualization interface that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** the guest OS and hypervisor work together to improve performance&lt;br /&gt;
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / it is in control&lt;br /&gt;
* Keeps a basic kernel to handle allocating and sharing resources rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* the goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other; to do this, the exokernel separates protection from management, and in doing so it performs 3 important tasks [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* Library OS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel; the example given is that a bad parameter passed to the LibOS affects only the application [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
* Securely expose hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except when required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build flexible, efficient systems [1]&lt;br /&gt;
* Expose allocation [1]&lt;br /&gt;
** allow LibOSes to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
* Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
* Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* LibOSes handle resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** this is enforced through allocation and deallocation (everything can be achieved through this, even which block to write, and so on)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is checked only at bind time [1]&lt;br /&gt;
** applications with complex needs for resources are only authorized during bind [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks authorization once; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
* three ways to implement:&lt;br /&gt;
** hardware mechanisms [1]&lt;br /&gt;
** software caching [1]&lt;br /&gt;
** downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code into the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
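A toy illustration of the bind-time/access-time split described above; the names and the HMAC-capability scheme are assumptions made for this sketch, not how the exokernel in [1] actually implements secure bindings:

```python
import hashlib
import hmac

SECRET = b"exokernel-demo-key"   # known only to the (toy) kernel

def authorized(libos_id, resource):
    # stand-in for an arbitrarily complex ownership/policy decision
    return resource in {"disk-block-7", "net-queue-0"}

def bind(libos_id, resource):
    # bind time: run the expensive policy check once, issue a capability
    if not authorized(libos_id, resource):
        raise PermissionError("bind refused")
    tag = hmac.new(SECRET, ("%s:%s" % (libos_id, resource)).encode(),
                   hashlib.sha256)
    return tag.hexdigest()

def access(libos_id, resource, cap):
    # access time: no policy logic at all, just a fast capability check
    tag = hmac.new(SECRET, ("%s:%s" % (libos_id, resource)).encode(),
                   hashlib.sha256)
    return hmac.compare_digest(cap, tag.hexdigest())

cap = bind("libos-A", "disk-block-7")
assert access("libos-A", "disk-block-7", cap)       # fast path succeeds
assert not access("libos-B", "disk-block-7", cap)   # wrong holder fails
```

This mirrors the point in the notes: the kernel guards the resource on every access without re-running, or even understanding, the authorization logic.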
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** LibOSes are able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource outright, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
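The revocation-then-abort sequence above can be sketched as follows (LibOS, please_release, and revoke are invented names for illustration, not the protocol's real interface in [1]):

```python
# Visible revocation with an abort fallback: the exokernel first asks the
# LibOS to release a resource; if the LibOS fails to respond, the kernel
# reclaims the resource by force, breaking the binding.
class LibOS:
    def __init__(self, cooperative):
        self.cooperative = cooperative
        self.held = {"frame-3", "frame-9"}

    def please_release(self, resource):
        # a well-behaved LibOS picks what to save and then complies;
        # a misbehaving one simply ignores the request
        if self.cooperative:
            self.held.discard(resource)
            return True
        return False

def revoke(free_pool, libos, resource):
    if not libos.please_release(resource):
        # abort protocol: take the resource back by force
        libos.held.discard(resource)
    free_pool.add(resource)

free_pool = set()
good = LibOS(cooperative=True)
bad = LibOS(cooperative=False)
revoke(free_pool, good, "frame-3")   # released voluntarily
revoke(free_pool, bad, "frame-9")    # reclaimed via the abort protocol
```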
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the exokernel&lt;br /&gt;
** Applications are given more power in an exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* Virtualization on VMs, Exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps like with a microkernel but virtualizing an entire Operating System.&lt;br /&gt;
* This can be costly but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&lt;br /&gt;
Should not be used as a source, but as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel:&lt;br /&gt;
Minimalistic abstractions for developers.&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access, through a protected layer, and at the same time can contain enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel – fewest hardware abstractions exposed to the developer&lt;br /&gt;
Microkernel – the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine – a simulation of a machine or of the devices requested by an application program&lt;br /&gt;
Exokernel – I&#039;ve got a sound card&lt;br /&gt;
Virtual Machine – I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel – I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel – very small, very predictable, good for scheduling (QNX is a microkernel: POSIX-compatible, benefits of running Linux software like modern browsers)&lt;br /&gt;
&lt;br /&gt;
This is some ideas I&#039;ve got on this question, please contribute below&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel is an even lower-level design than the microkernel, closer to the hardware, without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everything. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSes and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is that there is a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps, like with a microkernel, but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something, you hide a lot of the actual information, because you would not be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and inspect the hardware, it will all be displayed as fake (virtualized) hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction - paragraph or so on each type - then similarities - differences - and the compromise.  I am going to do some research and writing this weekend and I will put some up  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
btw in my page (i guess you can call it that) i have some resources i have found  --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each others instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today though but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure guy.  That sounds good.  There should be 5 or 6 of us though.. . Oh well. Their loss.  I will do some before or after work today. Ill start with Microkernel since there is not a large amount of info here, and so we don&#039;t overlap each other - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. Btw, if anyone has any more information, feel free to add it. It would be nice if you add the references so that citing is really easy; on acm.org it will automatically give you the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up) --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations so I don&#039;t have any resources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today and for VM he said we should focus on the implementation such as Xen and VMware , he also said to talk about para virtualization --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no big words.  Sorry about the Microkernels not done yet.  Working on an outline now.  Finally found how to access the ACM through carleton.  Gawd. &lt;br /&gt;
I am planning an outline, quick bit about kernels in general, (maybe mention monolith kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference ( Whomever did that == hero: true) about the scheduling and the Memory management.  Should that be included in kernels in general and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work but I don&#039;t see any work on the final essay done. I would love to help just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything, the issue is I don&#039;t have any context on what to write, how do I tie it in to the rest of the essay? I only have a Japanese Quiz tomorrow morning then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about exokernel or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together tomorrow sometime, and fill in the other stuff. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write, I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section could you post your word count so we can aim to be around there, it would obviously be just a recommendation but it&#039;s just better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay but I could be wrong, I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
Corey Ling&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -unassigned&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Conclusion - Jon S.   -  Only a sentence per paragraph, excluding Intro&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3472</id>
		<title>Talk:COMP 3000 Essay 1 2010 Question 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:COMP_3000_Essay_1_2010_Question_1&amp;diff=3472"/>
		<updated>2010-10-14T00:33:28Z</updated>

		<summary type="html">&lt;p&gt;Jslonosky: /* Unsorted */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Microkernel == &lt;br /&gt;
* Moving kernel functionality into processes contained in user space, e.g. file systems, drivers&lt;br /&gt;
* Keep basic functionality in kernel to handle sharing of resources&lt;br /&gt;
* Separation allows for manageability and security, corruption in one does not necessarily cause failure in system&lt;br /&gt;
* Large amount of moving from a process to the kernel to user space and back again; this is a costly operation.&lt;br /&gt;
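A toy sketch of the user-space-server idea above (hypothetical names, not any real microkernel API): a file-system request becomes a message to a server process instead of a direct kernel call, which is where the extra kernel/user crossings come from.&lt;br /&gt;

```python
import queue

# the "file system" lives in a user-space server process; the kernel only
# relays messages, so each request costs extra kernel/user transitions
fs_requests = queue.Queue()
fs_replies = queue.Queue()

def fs_server_step(files):
    # one iteration of the user-space file server loop
    op, name = fs_requests.get()
    if op == "read":
        fs_replies.put(files.get(name, b""))

def read_file(name):
    # client side: send a message, then block for the reply
    fs_requests.put(("read", name))
    return fs_replies.get()
```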
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Microkernel &#039;&#039;&#039;&lt;br /&gt;
* tries to minimize the amount of software that is mandatory or required [7]&lt;br /&gt;
Advantages of a microkernel:&lt;br /&gt;
* favors a modular system structure [7]&lt;br /&gt;
* a failure in one program does not impact any other programs [7]&lt;br /&gt;
* can support more than one API or strategy, since all programs are separated [7]&lt;br /&gt;
==== Microkernel Concepts ==== &lt;br /&gt;
* a piece of code is allowed in the kernel only if moving it outside the kernel would adversely affect the system [7]&lt;br /&gt;
* any subsystem program created must be independent of all other subsystems; any subsystem that is used can guarantee this from all other subsystems [7]&lt;br /&gt;
===== Address Space =====&lt;br /&gt;
* a mapping that relates virtual pages to physical pages [7]&lt;br /&gt;
* processor specific [7]&lt;br /&gt;
* hides the hardware&#039;s concept of address space [7]&lt;br /&gt;
* based on the idea of recursion: each subsystem has its own address space [7]&lt;br /&gt;
* the microkernel provides three operations [7]&lt;br /&gt;
** Grant [7]&lt;br /&gt;
*** allows the owner to give a page to a recipient; provided the recipient wants it, the page is removed from the owner&#039;s address space and put in the recipient&#039;s [7]&lt;br /&gt;
*** the page must be available to the owner [7]&lt;br /&gt;
** Map [7]&lt;br /&gt;
*** allows the owner to share a page with a recipient [7]&lt;br /&gt;
*** the page is not removed from the owner&#039;s address space [7]&lt;br /&gt;
** Flush [7]&lt;br /&gt;
*** removes the page from all recipients&#039; address spaces [7]&lt;br /&gt;
*** how does this work with Grant? --[[User:Asoknack|Asoknack]] 19:10, 12 October 2010 (UTC)&lt;br /&gt;
* allows memory management and paging outside the kernel&lt;br /&gt;
* Map and Flush are required for memory managers and pagers [7]&lt;br /&gt;
* can be used to implement access rights [7]&lt;br /&gt;
* controlling I/O rights and drivers is not done at the kernel level [7]&lt;br /&gt;
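A toy sketch of the three address-space operations above (hypothetical names and types, not the real L4 interface):&lt;br /&gt;

```python
class AddressSpace:
    """Toy model of a subsystem address space: just a set of pages."""
    def __init__(self):
        self.pages = set()

def grant(owner, recipient, page):
    # Grant: the page leaves the owner address space and enters the recipient
    assert page in owner.pages, "page must be available to the owner"
    owner.pages.remove(page)
    recipient.pages.add(page)

def map_page(owner, recipient, page):
    # Map: the page is shared; it stays in the owner address space too
    assert page in owner.pages
    recipient.pages.add(page)

def flush(owner, recipients, page):
    # Flush: the owner withdraws the page from every recipient it was mapped to
    for r in recipients:
        r.pages.discard(page)
```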
&lt;br /&gt;
===== Threads and IPC =====&lt;br /&gt;
* Threads&lt;br /&gt;
** in the kernel [7]&lt;br /&gt;
** since a thread has an address space, all changes to the thread need to be done by the kernel [7]&lt;br /&gt;
* IPC [7]&lt;br /&gt;
** in the kernel&lt;br /&gt;
** Grant and Map also need IPC (so by the principle above, this has to be in the kernel) [7]&lt;br /&gt;
** the basic way for subprocesses to communicate [7]&lt;br /&gt;
* Interrupts&lt;br /&gt;
** partially in the kernel [7]&lt;br /&gt;
** hardware is modeled as a set of threads which are empty except for their unique sender id [7]&lt;br /&gt;
** transformation of the interrupt into a message is done in the kernel [7]&lt;br /&gt;
** the kernel is not involved in device-specific interrupts and does not understand the interrupt [7]&lt;br /&gt;
*** resetting the interrupt is done at user level [7]&lt;br /&gt;
** if a privileged command is needed, it is done implicitly the next time an IPC command is sent from the device [7]&lt;br /&gt;
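A toy sketch of the interrupt-as-message idea above (hypothetical names; real kernels deliver these via the IPC path, not a Python queue): the kernel only turns the interrupt into an empty message carrying the hardware sender id, and the user-level driver does the device-specific work.&lt;br /&gt;

```python
import queue

# each device interrupt becomes an IPC message whose only content is the sender id
ipc_inbox = queue.Queue()

def kernel_deliver_interrupt(device_thread_id):
    # the kernel transforms the interrupt into an empty message;
    # device-specific handling happens in the user-level driver
    ipc_inbox.put({"sender": device_thread_id})

def user_driver_wait():
    # driver dispatches on which device interrupted
    msg = ipc_inbox.get()
    return msg["sender"]
```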
&lt;br /&gt;
===== Unique Identifiers =====&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine ==&lt;br /&gt;
* Partitioning or virtualizing resources among OSs running on top of a host OS&lt;br /&gt;
* The virtualized OS believes it is running on a full machine of its own&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
System Level Virtualization&lt;br /&gt;
&lt;br /&gt;
=== VMM ===&lt;br /&gt;
* stands for Virtual Machine Monitor, also known as the hypervisor [4]&lt;br /&gt;
* responsible for virtualization of hardware (mapping physical to virtual) and the VMs that run on top of the virtualized hardware [4]&lt;br /&gt;
* usually a small OS with no drivers, so it is coupled with a Linux distro that provides device/hardware access [4]&lt;br /&gt;
** the OS that the VMM uses for drivers is called the hostOS [6]&lt;br /&gt;
* the hostOS provides login and physical access to the hardware, as well as management for the VMM [6]&lt;br /&gt;
=== VM ===&lt;br /&gt;
* the OS that the VM runs is called the guestOS [6]&lt;br /&gt;
* the guestOS only sees resources that have been allocated to the VM [6]&lt;br /&gt;
==== Three approaches ====&lt;br /&gt;
*Type I virtualization [5]&lt;br /&gt;
** runs off the physical hardware [4]&lt;br /&gt;
** isolation of the guestOS from the hardware is done through process-level protection mechanisms [6]&lt;br /&gt;
*** ring 0 = VMM [6]&lt;br /&gt;
*** ring 1 = VM [6]&lt;br /&gt;
*** this means all instructions from the VM must go through the VMM [6]&lt;br /&gt;
** since there can be multiple VMs on a computer, the scheduling is done by the VMM [6]&lt;br /&gt;
** on boot the VMM creates a hardware platform for the VM [6]&lt;br /&gt;
** loads the VM kernel into virtual memory and then boots it like a regular computer [6]&lt;br /&gt;
** e.g. Xen [4]&lt;br /&gt;
*Type II virtualization [5]&lt;br /&gt;
** runs off the hostOS [4]&lt;br /&gt;
** e.g. VMware, QEMU [4]&lt;br /&gt;
* Para-virtualization [6]&lt;br /&gt;
** similar to Type I, but uses the hostOS for device driver access [6]&lt;br /&gt;
** provides a virtualization that is similar to the hardware [From the paper posted, no citation yet]&lt;br /&gt;
** guestOS and hypervisor work together to improve performance&lt;br /&gt;
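A toy sketch of the Type I idea above, that every privileged instruction from a VM goes through the VMM (hypothetical names; real hypervisors use hardware protection rings, not Python):&lt;br /&gt;

```python
class VM:
    def __init__(self, name, pages):
        self.name = name
        self.pages = pages   # resources allocated to this VM by the VMM

class VMM:
    """Runs in ring 0; boots VMs and mediates their privileged instructions."""
    def __init__(self):
        self.vms = []

    def boot(self, name, pages):
        # on boot the VMM creates a (virtual) hardware platform for the VM
        vm = VM(name, pages)
        self.vms.append(vm)
        return vm

    def execute(self, vm, instr):
        # the VM runs in ring 1, so privileged instructions trap to the VMM,
        # which emulates them against the virtual hardware of that VM
        if instr == "read_page_count":
            return len(vm.pages)   # the guestOS only sees what was allocated
        raise ValueError("unknown privileged instruction")
```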
&lt;br /&gt;
== Exokernel ==&lt;br /&gt;
* Microkernel-like architecture with limited abstractions: ask for a resource, get the resource itself, not a resource abstraction&lt;br /&gt;
* Less functionality provided by the kernel: just security and handling of resource sharing&lt;br /&gt;
* Once an application receives a resource, it can use it as it wishes / is in control&lt;br /&gt;
* Keeps a basic kernel to handle allocating and sharing resources, rather than developing straight to the hardware&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
* multiplexes resources securely, providing protection to mutually distrustful applications through the use of secure bindings [1]&lt;br /&gt;
* The goal of the exokernel is to give LibOSes maximum freedom without allowing them to interfere with each other. To do this the exokernel separates protection from management; in doing this it performs three important tasks: [1]&lt;br /&gt;
** tracking ownership of resources [1]&lt;br /&gt;
** ensuring protection by guarding all resource usage and binding points (not too sure what binding points are) [1]&lt;br /&gt;
** revoking access to the resources [1]&lt;br /&gt;
* LibraryOS (LibOS)&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** not trusted by the exokernel, so it need only be trusted by the application; the example given is that a bad parameter passed to the LibOS affects only the application [1] (so the LibOS can&#039;t interact with the kernel???)&lt;br /&gt;
** any application running on the exokernel can change the LibraryOS freely [1]&lt;br /&gt;
** applications that use a LibOS implementing standard interfaces (POSIX) will be portable to any system with the same interface [1]&lt;br /&gt;
** a LibOS can be made portable if it is designed to interact with a low-level, machine-independent layer that hides hardware details [1]&lt;br /&gt;
&lt;br /&gt;
=== Exokernel Design ===&lt;br /&gt;
==== Design Principles ====&lt;br /&gt;
*Securely expose hardware [1]&lt;br /&gt;
** an exokernel tries to create low-level primitives through which the hardware resources can be accessed; this also includes interrupts and exceptions [1]&lt;br /&gt;
** the exokernel also exports privileged instructions to the LibOS so that traditional OS abstractions can be implemented (e.g. processes, address spaces) [1]&lt;br /&gt;
** exokernels should avoid resource management except where required for protection (allocation, revocation, ownership) [1]&lt;br /&gt;
** application-level resource management is the best way to build efficient, flexible systems [1]&lt;br /&gt;
*Expose allocation [1]&lt;br /&gt;
** allow the LibOS to request physical resources [1]&lt;br /&gt;
** resource allocation should not be automatic; the LibOS should participate in every single allocation decision [1]&lt;br /&gt;
*Expose names [1]&lt;br /&gt;
** use physical names whenever possible [3] (not too sure what physical names are; I think it is as simple as what the hardware is called) --[[User:Asoknack|Asoknack]] 20:27, 9 October 2010 (UTC)&lt;br /&gt;
** physical names capture useful information [3]&lt;br /&gt;
*** safer and less resource-intensive than virtual names, as no translations are needed [3]&lt;br /&gt;
*Expose revocation [1]&lt;br /&gt;
** use a visible revocation protocol [1]&lt;br /&gt;
** allows a well-behaved LibOS to perform application-level resource management [1]&lt;br /&gt;
** visible revocation allows the LibOS to choose which instance of the resource to release [1] (visible means that when revocation happens, the exokernel tells the LibOS that the resource is being revoked)&lt;br /&gt;
&#039;&#039;&#039; Policy &#039;&#039;&#039;&lt;br /&gt;
* the LibOS handles resource policy decisions&lt;br /&gt;
* exokernels have a policy to decide between competing LibOSes (priority, share of resources)&lt;br /&gt;
** it enforces this through allocation and deallocation (everything can be achieved through this, even which block to write and such)&lt;br /&gt;
&lt;br /&gt;
==== Secure Bindings ====&lt;br /&gt;
* used by the exokernel to allow the LibOS to bind to resources [1]&lt;br /&gt;
* allows the separation of protection and resource use [1]&lt;br /&gt;
* authorization is only checked at bind time [1]&lt;br /&gt;
** applications with complex resource needs are only authorized at bind time [1]&lt;br /&gt;
* access checking is done at access time, and there is no need to understand complex resource needs during access [1]&lt;br /&gt;
** (this means that the exokernel checks once to make sure an application has authorization; once approved, when the application tries to use the resource the exokernel is only concerned about policy conflicts) --[[User:Asoknack|Asoknack]] 18:20, 9 October 2010 (UTC)&lt;br /&gt;
** allows the kernel to protect the resources without understanding what the resource is [1]&lt;br /&gt;
*three ways to implement:&lt;br /&gt;
* hardware mechanisms [1]&lt;br /&gt;
* software caching [1]&lt;br /&gt;
* downloading application code [1]&lt;br /&gt;
&#039;&#039;&#039; Downloading Code to the Kernel &#039;&#039;&#039;&lt;br /&gt;
* used to implement secure bindings and improve performance [1]&lt;br /&gt;
** reduces the number of kernel crossings [1]&lt;br /&gt;
** downloaded code can be run without the application being scheduled [2]&lt;br /&gt;
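A toy sketch of the bind-time/access-time split described above (hypothetical names; the paper&#039;s real bindings use hardware mechanisms such as TLB entries, not a Python set):&lt;br /&gt;

```python
class Exokernel:
    """Toy secure bindings: expensive authorization once, cheap check per access."""
    def __init__(self, acl):
        self.acl = acl          # resource -> set of authorized apps
        self.bindings = set()   # (app, resource) pairs bound so far

    def bind(self, app, resource):
        # the only point where the complex authorization policy is consulted
        if app in self.acl.get(resource, set()):
            self.bindings.add((app, resource))
            return True
        return False

    def access(self, app, resource):
        # the per-access check is a simple lookup; the kernel needs no
        # understanding of what the resource actually is
        return (app, resource) in self.bindings
```

bind() is where the expensive policy check happens; access() is the hot path.&lt;br /&gt;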
==== Visible Resource Revocation ====&lt;br /&gt;
* used for most resources [1]&lt;br /&gt;
** allows the LibOS to help with deallocation [1]&lt;br /&gt;
** the LibOS is able to learn which resources are scarce [1]&lt;br /&gt;
* slower than invisible revocation, as application involvement is required [1]&lt;br /&gt;
** an example of where invisible revocation is used is processor addressing-context identifiers [1]&lt;br /&gt;
==== Abort Protocol ====&lt;br /&gt;
* allows the exokernel to take resources away from the LibOS by force [1]&lt;br /&gt;
* used when the LibOS fails to respond to the revocation request [1]&lt;br /&gt;
* the exokernel must be careful not to delete the resource outright, as the LibOS might need to write some system-critical data to it [1]&lt;br /&gt;
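The revoke-then-abort flow above, as a toy sketch (hypothetical names, not the paper&#039;s actual interface):&lt;br /&gt;

```python
def reclaim(exokernel_free_list, libos, resource, max_requests=3):
    # visible revocation: ask the LibOS, which may release a resource of
    # its choosing (it knows best which instance it can live without)
    for _ in range(max_requests):
        released = libos.please_release(resource)
        if released is not None:
            exokernel_free_list.append(released)
            return released
    # abort protocol: the LibOS did not respond, so take the resource by
    # force, but let it save critical state first rather than deleting it
    libos.save_state(resource)
    libos.revoke_binding(resource)
    exokernel_free_list.append(resource)
    return resource
```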
&lt;br /&gt;
== Comparisons  ==&lt;br /&gt;
====Exokernel/Microkernel====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Limited functionality in kernel&lt;br /&gt;
** functionality in kernel to handle sharing of resources and security&lt;br /&gt;
** avoids programming directly to hardware which creates a dependency&lt;br /&gt;
* Additional functionality provided in user space as processes&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Minimal abstractions provided by the kernel&lt;br /&gt;
** Applications given more power in exokernel&lt;br /&gt;
&lt;br /&gt;
====Exokernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Similarities&#039;&#039;&#039;&lt;br /&gt;
* Idea of partitioning resources between applications/OSs&lt;br /&gt;
* &amp;quot;Control&amp;quot; of resource given&lt;br /&gt;
* Isolation from other applications/OSs&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* Exokernel runs applications, VM runs OS&lt;br /&gt;
* VM uses a hostOS and guestOSs run on top&lt;br /&gt;
* VMs use virtualization; the exokernel deals with real resources&lt;br /&gt;
* VM hides a lot of information because it emulates. Exokernel does not.&lt;br /&gt;
&lt;br /&gt;
====Microkernel/VM====&lt;br /&gt;
&#039;&#039;&#039;Differences&#039;&#039;&#039;&lt;br /&gt;
* With a virtual machine, you are not virtualizing apps as with a microkernel, but virtualizing an entire operating system.&lt;br /&gt;
* This can be costly, but the benefits are that it&#039;s easier and all the standard OS features are available.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1]&amp;lt;nowiki&amp;gt; Engler, D. R., Kaashoek, M. F., and O&#039;Toole, J. 1995. Exokernel: an operating system architecture for application-level resource management. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 251-266. DOI= http://doi.acm.org/10.1145/224056.224076 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]&amp;lt;nowiki&amp;gt;Engler, Dawson R. &amp;quot;The Exokernel Operating System Architecture.&amp;quot; Diss. Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Web. 9 Oct. 2010. &amp;lt;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.5054&amp;amp;rep=rep1&amp;amp;type=pdf&amp;gt;.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]&amp;lt;nowiki&amp;gt;Kaashoek, M. F., Engler, D. R., Ganger, G. R., Briceño, H. M., Hunt, R., Mazières, D., Pinckney, T., Grimm, R., Jannotti, J., and Mackenzie, K. 1997. Application performance and flexibility on exokernel systems. In Proceedings of the Sixteenth ACM Symposium on Operating Systems Principles  (Saint Malo, France, October 05 - 08, 1997). W. M. Waite, Ed. SOSP &#039;97. ACM, New York, NY, 52-65. DOI= http://doi.acm.org/10.1145/268998.266644 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[4]&amp;lt;nowiki&amp;gt;Vallee, G.; Naughton, T.; Engelmann, C.; Hong Ong; Scott, S.L.; , &amp;quot;System-Level Virtualization for High Performance Computing,&amp;quot; Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on , vol., no., pp.636-643, 13-15 Feb. 2008&lt;br /&gt;
DOI= http://doi.acm.org/10.1109/PDP.2008.85 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[5]&amp;lt;nowiki&amp;gt;Goldberg, R. P. 1973. Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems  (Cambridge, Massachusetts, United States, March 26 - 27, 1973). ACM, New York, NY, 74-112. DOI= http://doi.acm.org/10.1145/800122.803950 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[6]&amp;lt;nowiki&amp;gt;Vallee, G., Naughton, T., and Scott, S. L. 2007. System management software for virtual environments. In Proceedings of the 4th international Conference on Computing Frontiers (Ischia, Italy, May 07 - 09, 2007). CF &#039;07. ACM, New York, NY, 153-160. DOI= http://doi.acm.org/10.1145/1242531.1242555 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[7]&amp;lt;nowiki&amp;gt;Liedtke, J. 1995. On micro-kernel construction. In Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles  (Copper Mountain, Colorado, United States, December 03 - 06, 1995). M. B. Jones, Ed. SOSP &#039;95. ACM, New York, NY, 237-250. DOI= http://doi.acm.org/10.1145/224056.224075 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Unsorted ==&lt;br /&gt;
An overview of exokernels, virtual machines, and microkernels: *[http://www2.supchurch.org:10999/files/school/classes/CSCI4730/Lectures/grad-structures.ppt Overview] (PowerPoint)&amp;lt;br&amp;gt;&lt;br /&gt;
Should not be used as a source, but as an overview.&lt;br /&gt;
&lt;br /&gt;
The original paper on [http://portal.acm.org/citation.cfm?id=224076 Exokernels] --[[User:Gautam|Gautam]] 22:39, 6 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Exokernel -&lt;br /&gt;
Minimalistic abstractions for developers&lt;br /&gt;
Exokernels can be seen as a good compromise between virtual machines and microkernels in the sense that exokernels can give developers low-level access, similar to direct access but through a protected layer, and at the same time can contain enough hardware abstraction to give application programs a similar benefit of hiding the hardware resources.&lt;br /&gt;
Exokernel: fewest hardware abstractions presented to the developer&lt;br /&gt;
Microkernel: the near-minimum amount of software that can provide the mechanisms needed to implement an operating system&lt;br /&gt;
Virtual machine: a simulation of whatever devices are requested by an application program&lt;br /&gt;
Exokernel: I&#039;ve got a sound card&lt;br /&gt;
Virtual machine: I&#039;ve got the sound card you&#039;re looking for, a perfect virtual match&lt;br /&gt;
Microkernel: I&#039;ve got a sound card that plays the Kazakhstan sound format only&lt;br /&gt;
Microkernel: very small, very predictable, good for scheduling (QNX is a microkernel: POSIX compatible, with the benefits of running Linux software like modern browsers) &lt;br /&gt;
&lt;br /&gt;
These are some ideas I&#039;ve got on this question; please contribute below.&lt;br /&gt;
-Rovic&lt;br /&gt;
&lt;br /&gt;
Outlining some main features here as I see them.&lt;br /&gt;
&lt;br /&gt;
I found that the exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction. They have the same architecture, with the basic functionality contained in the kernel to manage everyone. As the exokernel &amp;quot;gives&amp;quot; the resource to the application, the application can use the resource in isolation from other applications (until forced to share), much like VMs receive their resources, either partitioned or virtualized, and execute as if running on their own machine. There is a similar notion of partitioning the resources among applications/OSs and allowing them to take control of what they have. &lt;br /&gt;
&lt;br /&gt;
I&#039;ll locate some references later on. --[[User:Slay|Slay]] 15:00, 7 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to post my answer for question 1 on the individual assignment and hope it helps. --[[User:Aellebla|Aellebla]] 15:06, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
The design of the microkernel was to take everything they could out of the kernel and put it into a process. For example, networking would be put into a process instead of staying in the kernel. The microkernel devs tried to keep lots of things in user space for efficiency. But one major problem with this is that there would be a large amount of moving from a process to the kernel to user space and back again, and this is a costly, inefficient process. It was an application-specific OS; there was no multiplexing. With a virtual machine you are not virtualizing apps like with a microkernel, but virtualizing an entire operating system. This is very heavy, but the benefits are that it&#039;s easy and all the standard OS features are there, whereas in a microkernel setup they would not all be there, and this can be seen as a compromise.&lt;br /&gt;
&lt;br /&gt;
Exokernels can be seen as a compromise between virtual machines and microkernels because virtual machines emulate and exokernels do not. When you emulate something, you hide a lot of the actual information because you wouldn&#039;t be able to see the &#039;real&#039; hardware. If we look at a VirtualBox setup running Linux and go look at all the hardware, it will be displayed as fake hardware.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maybe we can have an introduction, a paragraph or so on each type, then similarities, differences, and the compromise.  I am going to do some research and writing this weekend and I will put some up.  -- Jslonosky&lt;br /&gt;
&lt;br /&gt;
By the way, on my page (I guess you can call it that) I have some resources I have found. --[[User:Asoknack|Asoknack]] 15:50, 8 October 2010 (UTC)&lt;br /&gt;
- Wow, nice, man. I will go ahead and write up the descriptive paragraphs on each kernel and virtual machine if no one minds. --Jslonosky&lt;br /&gt;
&lt;br /&gt;
I think we should divide up the paragraphs and proofread each other&#039;s instead. (Are there only 4 of us?) I don&#039;t have much time to work on this today, though, but I&#039;ll try to work on it tomorrow morning. - Slay&lt;br /&gt;
&lt;br /&gt;
Sure, guy.  That sounds good.  There should be 5 or 6 of us, though... Oh well, their loss.  I will do some before or after work today. I&#039;ll start with Microkernel, since there is not a large amount of info here and so we don&#039;t overlap each other. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Yeah, I think there were more like 7 of us. By the way, if anyone has any more information, feel free to add it; it would be nice if you add the references so that citing is really easy. On acm.org it will auto-generate the citation info (where it says Display Formats, click on ACM Ref and a new window with the citation info pops up). --[[User:Asoknack|Asoknack]] 02:28, 11 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
I added an outline of the similarities and differences. Add any more that I missed. These are from observations, so I don&#039;t have any sources. -Slay&lt;br /&gt;
That&#039;s probably fine.  Our textbook probably outlines some of them, so I am sure we can find a few there. - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Talked to the teacher today, and for VM he said we should focus on implementations such as Xen and VMware; he also said to talk about para-virtualization. --[[User:Asoknack|Asoknack]] 18:42, 12 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
A paper about emulation and paravirtualization [http://portal.acm.org/citation.cfm?id=1189289&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=105648137&amp;amp;CFTOKEN=47153176&amp;amp;ret=1#Fulltext link] - Slay&lt;br /&gt;
&lt;br /&gt;
Oh no, big words.  Sorry about the Microkernels section not being done yet.  Working on an outline now.  Finally found out how to access the ACM through Carleton.  Gawd. &lt;br /&gt;
I am planning an outline: a quick bit about kernels in general (maybe mention monolithic kernels?), and what microkernels do.&lt;br /&gt;
I see the microkernel outline info and a reference (whoever did that == hero: true) about the scheduling and the memory management.  Should that be included in kernels in general, and then mention what microkernels build upon/change? - JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sorry, late to the party here. My mistake was not checking the discussion page when I checked in. I don&#039;t want to trample anyone&#039;s current work, but I don&#039;t see any work on the final essay done. I would love to help; I just need to know where I can step in so as to not screw anyone else up. -- [[User:Cling|Cling]]&lt;br /&gt;
&lt;br /&gt;
I don&#039;t think I&#039;ll be able to write up something for the final essay, even though I suggested splitting it. I&#039;ll do research tonight though on the paravirtualization. If I find the time, I&#039;ll try to write something. Sorry about that. --[[User:Slay|Slay]] 21:52, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
We all have 3004 to do too, man.  I do not think anyone has chosen to do the Virtual Machine section yet, or the Exokernel itself. But the contrast paragraph and the intro are chosen, and the intro is done.  Microkernel and kernel will be done in an hour, I hope. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
I can attempt to write up anything; the issue is I don&#039;t have any context on what to write, or how to tie it in to the rest of the essay. I only have a Japanese quiz tomorrow morning, then I should be good to write anything up for the rest of the day. As someone who has already written part of the essay, and assuming I attempt the exokernel section, how much do you think I should write? Should it just be about the exokernel, or should there be comparisons to the other topics? Thanks --[[User:Cling|Cling]] 23:14, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
Go with the Exokernel itself.  Slade is getting off work in an hour and we can double-check what he is doing then.  We can put it together sometime tomorrow and fill in the other stuff. - JSLonosky&lt;br /&gt;
&lt;br /&gt;
I&#039;ll attempt to work on VM tonight, then. I would feel so bad if I didn&#039;t write anything. -Slay&lt;br /&gt;
&lt;br /&gt;
Still wondering how much to write. I think we should decide on a decent word count or length so we don&#039;t have one short section (which would probably be mine) and/or one massive section that dwarfs all the others. If anyone has already written a section, could you post your word count so we can aim to be around there? It would obviously be just a recommendation, but it&#039;s better to be on the safe side and have everything uniform. I haven&#039;t seen any formal requirements for the essay, but I could be wrong; I also haven&#039;t been to class in a while. --[[User:Cling|Cling]] 23:33, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yeah Slay, VM probably doesn&#039;t have much to write about.  Get something down and we can go over it.  Cling, just write what you think.  There is not a lot to go over if I write kernel/microkernel well enough.  What is an exokernel?  The exokernel was an even lower-level design than the microkernel, closer to the hardware without abstraction, basically (as said by Slade). I will probably end up with 500 or a bit more words. -- JSlonosky&lt;br /&gt;
&lt;br /&gt;
Sound off!&lt;br /&gt;
&lt;br /&gt;
Who&#039;s actually reading this? Add your name to the list...&lt;br /&gt;
&lt;br /&gt;
Rovic P.&lt;br /&gt;
Jon Slonosky&lt;br /&gt;
&lt;br /&gt;
== The Essay ==&lt;br /&gt;
&lt;br /&gt;
Let&#039;s actually break down the essay into components, then write it here.&lt;br /&gt;
&lt;br /&gt;
I&#039;d like to go along the premise that microkernels and virtual machines are &amp;quot;weaker&amp;quot; than exokernels in design for the essay. If anyone has any objections, add it here. &lt;br /&gt;
&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
 What do you mean by &amp;quot;weaker&amp;quot;? (I think you mean exokernels take the best of both worlds) --[[User:Asoknack|Asoknack]] 02:45, 13 October 2010 (UTC)&lt;br /&gt;
&lt;br /&gt;
What I mean by weaker is that we should focus on the things microkernels and virtual machines may not do as well compared to a system based on an exokernel design, and then focus on how an exokernel can take the best of both worlds. Please choose which section you will work on; that&#039;s not to say it&#039;ll be the only part you do, but rather we&#039;ll all contribute to each part. 1 day left.&lt;br /&gt;
-Slade&lt;br /&gt;
&lt;br /&gt;
...to the extent that exokernels can be seen as a compromise between virtual machines and microkernels. &lt;br /&gt;
-I&#039;ll work on the initial intro. -Slade&lt;br /&gt;
&lt;br /&gt;
3 paragraphs that prove it&lt;br /&gt;
Explain how the key design characteristics of these three system architectures compare with each other. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
intro/thesis statement -Rovic P.&lt;br /&gt;
&lt;br /&gt;
Paragraph 1 -Microkernel -Jon S.&lt;br /&gt;
&lt;br /&gt;
Paragraph 2 -Virtual Machine -unassigned&lt;br /&gt;
&lt;br /&gt;
Paragraph 3 -Exokernel -Corey L&lt;br /&gt;
&lt;br /&gt;
Conclusion -unassigned&lt;/div&gt;</summary>
		<author><name>Jslonosky</name></author>
	</entry>
</feed>